
The New York Times

Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with the express goal of digging up every piece of negative information it can find. They contact Émile Torres, David Gerard, and Timnit Gebru; collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom; and start calling Astral Codex Ten (ACX) readers to ask about rumors they'd heard of affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday.

A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of those claims, though their principles compel them to avoid threatening any form of legal action. The Times unconditionally refuses, claiming it must meet a hard deadline. The day before publication, Scott Alexander gets his hands on a copy of the article and informs the Times that it's full of provable falsehoods. They correct one of the claims he flags, but tell him it's too late to fix another.

The final article comes out. It states openly that it's not aiming to be a balanced view, but to provide a deep dive into the worst of EA so people can judge for themselves. It contains lurid and alarming claims about Effective Altruists, paired with a section of responses, drawn from its conversations with EA, that it says presents the EA perspective in a way CEA agreed was a good summary. In the end, it warns people that EA is a destructive movement likely to chew up and spit out young people hoping to do good.

In the comments, the overwhelming majority of readers thank it for providing such thorough journalism. Readers broadly agree that waiting to review CEA's further claims was clearly unnecessary. David Gerard pops in to provide more harrowing stories. Scott gets a polite but skeptical hearing as he shares his story of what happened, and one enterprising EA presents hard evidence of one error in the article to a mixed and mostly hostile audience. A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.

This is not an essay about the New York Times.

The rationalist and EA communities tend to feel a certain way about the New York Times. Adamantly a certain way. Emphatically a certain way, even. I can't say my sentiment is terribly different—in fact, even when I have positive things to say about the New York Times, Scott has a way of saying them more elegantly, as in The Media Very Rarely Lies.

That essay segues neatly into my next statement, one I never imagined I would make:

You are very very lucky the New York Times does not cover you the way you cover you.

A Word of Introduction

Since this is my first post here, I owe you a brief introduction. I am a friendly critic of EA, one who would join you were it not for irreconcilable differences in fundamental values, and who thinks you are, by and large, one of the most pleasant and well-meaning groups of people in the world. I spend much more time in the ACX sphere or around its more esoteric descendants and know more than anyone ought about its history and occasional drama. Some of you know me from my adversarial collaboration in Scott's contest some years ago, others from my misadventures in "speedrunning" college, still others from my exhaustively detailed deep dives into obscure subculture drama (sometimes in connection with my job).

The last, I'm afraid, is why I'm here this time around—I wish we were meeting on better terms. I saw a certain malcontent[1] complaining that his abrasiveness was poorly received, stopped by to see what he was on about, and got sucked in—as one is—by every word of the blow-by-blow fighting between two companies I knew nothing about in an ecosystem where I am a neighbor but certainly not a member. I came to this fresh: never having heard of @Ben Pace, @Habryka, or Nonlinear, having about as much knowledge of EA as any outsider can have while having no ties to its in-person community, and with the massive benefit of hindsight in being able to read side-by-side what active EA forum users read three months apart. I pursued it out of sheer fascination when I should have been studying for my Civil Procedure final, entranced by a saga that would not leave my mind.

What precisely do I think of Nonlinear, a group I had never heard of prior to a few days ago? More or less what my friends think, really—credit them for the bulk of the following description. It sounds like a minor celebrity got comfortably rich young, dove into the same fascinating online ecosystem we all did, and decided to spend his retirement with his partner (who has an impressive history of dedication to charity) and brother scratching his itch to be productive by traveling the world and doing charity via talking with cool, smart people about meaningful ideas. It sounds like they hired someone who imagined doing charity work but instead lived a life more akin to that of a live-in assistant to a celebrity, picked up another traveling-partner-turned-employee with a long history of tumultuous encounters, and had a lot of very predictable drama of the sort that happens when young people live as roommates and traveling partners with their bosses.

From there, the ex-employees, disillusioned and burnt out, began spreading allegations that toed and sometimes crossed the line between "exaggerated" and "fabricated", and the founders learned an important lesson about mixing work and pleasure, one that soon turned into the much crueller lesson of what it feels like to be sewn inside a punching bag and dangled in front of your tight-knit community. They made a major unforced tactical error in taking so long to respond and another in not writing in the right sort of measured, precise tone that would have allowed them to defuse many criticisms. They were also unambiguously, inarguably, and severely wronged by the EA/LessWrong (LW) community as a whole.

What about Lightcone, a group I quickly realized maintains LessWrong, the ancestral home of my people? I'm grateful they've maintained a community that has inspired me and so many people like me. I get the sense that they're earnest, principled, precise thinkers who care deeply about ethical behavior. I've learned they recently faced the severe blow of watching a trusted community member be revealed as the fraud to end all frauds while feeling like there was something they could have done. I think they met earnest people who talked about feeling hurt and genuinely wanted to help to the best of their ability. And I wish I'd built up sufficient social capital with them to allow it to feel like a relationship of trust rather than the intrusion of a hostile stranger when I say they wrote one of the most careless, irresponsible, destructive callout articles I have ever had the displeasure of reading—one they seem to continue to be in denial about.

In a sense, though, I think they should be thanked for it, because the community reaction to their article indicates it was not just them. I follow drama and blow-ups in a lot of different subcultures. It's my job. The response I saw from the EA and LessWrong communities to their article was thoroughly ordinary as far as subculture pile-ons go, even commendable in ways. Here's the trouble: the ways it was ordinary are the ways it aspires to be extraordinary, and as the community walked headlong into every pitfall of rumormongering and dogpiles, it did so while explaining at every step how reasonable, charitable, and prudent it was in doing so.

The Story So Far: A Recap

Starting in mid-2022, two disgruntled former Nonlinear employees, referred to by the pseudonyms Alice and Chloe, began to spread rumors about the misery of their time there. They told these rumors to many people within the EA community, including CEA, requesting that CEA not tell Nonlinear about any of their complaints and pushing for unspecified action against the organization. CEA discussed the possibility of the former employees writing a public post, but they were unwilling to do so. In November 2022, someone made an anonymous post spreading vague rumors about the same. As more rumors spread, some organizations within EA began to restrict Nonlinear's opportunities in the EA space, such as CEA not inviting them to present at conferences.

Ben Pace, who managed a community hub called the Lightcone offices, heard these rumors when Kat Woods and Drew Spartz of Nonlinear applied to visit the offices in early 2023, and told them he was concerned about the rumors but still allowed a visit. Dissatisfied with Kat's explanations when he chatted with her, he began to investigate further, spending several hundred hours over six months looking for all negative information he could find about Nonlinear (centering around the experiences of those two former employees) via interviews and investigative research. Others in the Lightcone office participated in this process, with Oliver Habryka reporting the office as a whole spent close to a thousand hours on it. In collaboration with their sources, they set a publication date for an exposé about Nonlinear.

Less than a week before the publication date, Ben informed Nonlinear that he had been digging into them with intent to publish an exposé and sent them a list of concerns. Around 60 hours before publication, Ben had a three-hour phone call with the Nonlinear cofounders about those concerns in which they told him his list contained a number of exaggerations and fabrications. Nonlinear requested a week to compile and present evidence against these claimed fabrications, which Ben and Oliver rejected. The day before publication, longtime community member Spencer Greenberg obtained a draft copy of the post and warned Ben and Oliver that it contained a number of falsehoods. Ben edited some, but when Spencer sent him message records contesting one claim in the post two hours before publication, Lightcone concluded it was too late to change and that the post must release on schedule. During the few days before publication and in particular after seeing a draft copy of the post, the Nonlinear founders grew increasingly urgent and aggressive in their messages, eventually threatening to sue Lightcone for defamation if they released the post without taking another week to investigate Nonlinear's evidence. Lightcone refused.

Ben released the post on September 7th to the EA/LW communities, where it was widely circulated and supported, including by CEA's Community Health team.[2] After publishing the post, he paid Alice and Chloe $5,000 each. Kat shared screenshots contesting one of the post's claims in the comments section and Nonlinear promised a comprehensive reply as soon as possible. On September 15th, Ben released a postmortem sharing further thoughts on Nonlinear and concluding that the CEA Community Health team was not doing enough to police the EA ecosystem. Nonlinear stayed mostly quiet until December 12th, when they released an in-depth post contesting the bulk of the claims in the exposé.

On December 13th, I heard about this sequence of events and the players involved for the first time.

Avoidable, Unambiguous Falsehoods in "Sharing Information About Nonlinear"

If you have a strong stake in Nonlinear's reputation, I encourage you to read their full response, including the appendix. Here, I will aim at something simpler: documenting some of the standout times Ben made claims easily and unambiguously contested by primary sources from Nonlinear, mostly about situations that occurred when Alice and Chloe were traveling with them, claims that could and should have been fixed with a modicum of effort. Each subsection that follows will begin with a direct pull quote from Ben's article, followed by my summary of the evidence Nonlinear provides rebutting it, with sources and specific screenshots in footnotes.

"My current understanding is that they’ve had around ~4 remote interns, 1 remote employee, and 2 in-person employees (Alice and Chloe). Alice was the only person to go through their incubator program."

Nonlinear has had 21 employees, including five other incubatees. This is a low-importance claim, but it's illustrative. Checking with Nonlinear, who were not only willing to clarify points but begging to do so, would have taken no time at all. Declining to fact-check even this demonstrates a low priority for fact-checking in general.[3]

"they were not able to live apart from the family unit while they worked with them" 

Per Nonlinear, Alice lived apart from them for six weeks during her four months of employment. This is a slight exception to my "primary source" rule—verifying whether Alice lived apart for six weeks would take a bit more work than just Nonlinear's word, but it directly contradicts Ben's claim such that publication of the original claim becomes irresponsible.[4]

"Chloe’s salary was verbally agreed to come out to around $75k/year. However, she was only paid $1k/month, and otherwise had many basic things compensated i.e. rent, groceries, travel. This was supposed to make traveling together easier, and supposed to come out to the same salary level." 

Nonlinear clearly explained Chloe's compensation scheme from the beginning and presented it in a clear and unambiguous written contract, which they fulfilled.[5] It was always conceptualized and presented as $1000 a month plus living expenses. She accepted the position knowing its compensation. It's not a level of compensation I'd advise anyone in it for the money to take, but the experience is the sort that many young people, including me, have pursued knowing there's a monetary tradeoff. 

I don't agree with Nonlinear's apparent conception of benefits as functionally equivalent to pay given my experience in comparable situations (the military and a Mormon mission)[6], but Chloe had no serious grounds to complain about salary, and Ben's description of it ignores the actual employment agreement and misrepresents the situation.

"Over her time there she spent through all of her financial runway, and spent a significant portion of her last few months there financially in the red (having more bills and medical expenses than the money in her bank account) in part due to waiting on salary payments from Nonlinear. She eventually quit due to a combination of running exceedingly low on personal funds and wanting financial independence from Nonlinear, and as she quit she gave Nonlinear (on their request) full ownership of the organization that she had otherwise finished incubating." ... "At the time of her quitting she had €700 in her account, which was not enough to cover her bills at the end of the month, and left her quite scared. Though to be clear she was paid back ~€2900 of her outstanding salary by Nonlinear within a week, in part due to her strongly requesting it."

Timed transactions straightforwardly demonstrate that aspects of Alice's claims about waiting for salary payments were false. Kat also explains that the delay in expense reimbursement was because Alice switched from recording in their public reimbursement system to using a private spreadsheet without telling them, and that they reimbursed Alice as soon as she told them. While the document provides no primary source on this, as with the "not allowed to live apart" claim, the counterclaim provides ample reason to either verify more closely or avoid publishing the falsehood.[7]

"One of the central reasons Alice says that she stayed on this long was because she was expecting financial independence with the launch of her incubated project that had $100k allocated to it (fundraised from FTX). In her final month there Kat informed her that while she would work quite independently, they would keep the money in the Nonlinear bank account and she would ask for it, meaning she wouldn’t have the financial independence from them that she had been expecting, and learning this was what caused Alice to quit."

Nonlinear provides two screenshots to support an in-depth narrative that Alice's role was always as a project manager within Nonlinear, that they clarified repeatedly that she was a project manager within Nonlinear, that all of the funding in her project came via Nonlinear, that they would never have simply handed a quarter-million dollars to an untested new organization, and that Alice repeatedly attempted to claim she had a separate organization despite that.[8]

Ben's quoted claim is not technically false: Alice did indeed seem to believe, or claim to believe, that she would get financial independence. It provides a misleading impression, though, to present it without any of the context and primary sources available from Nonlinear.

"Alice quit being vegan while working there. She was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days. Alice eventually gave in and ate non-vegan food in the house. She also said that the Nonlinear cofounders marked her quitting veganism as a ‘win’, as they had been arguing that she should not be vegan."

There was vegan food in the house and they picked food up for her while sick themselves, but on one of the days they wanted to go to a Mexican place with limited vegan options instead of getting a vegan burger from Burger King.[9] "Nobody in the house was willing to go out and get her vegan food" is unambiguously false. Crucially, Ben had sufficient information to know it was false before the time of publication.

"Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn't mind polyamory "on the other side of the world”, but couldn't stand it right next to her, and probably either Alice would need to become monogamous or Alice should leave the organization." 

Kat points out that she recommended poly people for Alice to date multiple times, but felt strongly that Alice dating Drew (her colleague, roommate, and the brother of her boss) would be a bad idea. I happen to agree with her reasoning on that front. I find this claim particularly noxious because advising someone in the strongest possible terms against dating their boss's brother, who lives with them, seems from my own angle like a thoroughly sane thing to do, and I think subsequent events wholly vindicated Kat's advice.[10]

"Before she went on vacation, Kat requested that Alice bring a variety of illegal drugs across the border for her (some recreational, some for productivity). Alice argued that this would be dangerous for her personally, but Emerson and Kat reportedly argued that it is not dangerous at all and was “absolutely risk-free”. Privately, Drew said that Kat would “love her forever” if she did this."

When you read "bring a variety of illegal drugs across the border [...] (some recreational, some for productivity)," do you think "stop by a pharmacy for ADHD meds"? I do not. It conjures up images of cartels, of back-alley meth deals, of steep danger and serious wrongdoing. For many responding to the original post, this was one of the most severe indicators of wrongdoing. If it had been accurately reported, whatever people think about casual Adderall use, it simply would not have had the same impact.[11] Oliver asserts his belief that more is being covered up here—I have no basis on which to judge this, but if so, it would have been an excellent point for Ben to confirm and present in detail while writing an article on the matter.

Ben and Oliver focus a great deal on the amount of time and effort that went into the post: 100-200 hours per the original post, 320 hours per Ben's postmortem, somewhat over 1000 hours spread over the Lightcone staff per a comment from Oliver. They and the community alike use this time and effort to justify the difficulty of an investigation like this, the impracticality of asking for more, the high standards that went into the investigation, and the lack of need to add any sort of delay.

I believe they spent that time in productive, reasonable ways, but I keep coming back to an inescapable conclusion about it all: You can do a lot of cross-checking of a lot of claims in a thousand hours, but without talking with the people involved, you can do very little to cross-check the core allegations. The bulk of the claims I list above, and the bulk of the claims the community seems to have found most alarming, occurred in times and places where there were precisely five people present. Ben and Oliver spent a thousand hours diligently avoiding three of those five people while hearing and collecting rumors that they were vile, spent three hours with a publication date already set dumping every allegation on them at once, then flat-out refused to wait so much as a week to allow those three people to compile concrete material evidence against their claims.

They were, in fact, in such a hurry to release that when Spencer Greenberg got a last-minute look at the draft and warned them of serious inconsistencies, they hurriedly adjusted some before pleading lack of time on another and treating an update in the comments section as sufficient. Oliver claims, and I have little reason to contest, that Ben published (almost) nothing he knew was wrong at the time. But they both knew they were receiving information contradicting their claims up until the moment of publication and being promised more of that information shortly.

The errors in this section and in the process that led to it are inexcusable for any published work purporting to be the result of serious investigation. They cannot be said to be either trivial or tangential. These are not the results of a truth-seeking process.

These Issues Were Known and Knowable By Lightcone and the Community. The EA/LW Community Dismissed Them

The original post and the discussion around it contained three glaring red flags:

  • At the top, Ben reminded the community that the bulk of the post came from a search for negative information, not for a complete picture.
  • In the comments, @spencerg, someone with a long history of good faith and fair dealing in the rationalist community, warned that the post contained many false claims, some of which he had warned Ben about immediately before publication and which Ben took only half-hearted measures to correct.
  • Also in the comments, @Geoffrey Miller, with his own long history of serious, sincere engagement within the rationalist community, exhorted the community to adhere to the standards of professional investigative journalism—learned from bitter experience—and to be professionally accountable for truth and balance—and warned that the post realistically failed that standard.

The community treated Ben's admission that he had been on a six-month hunt for negative information not as a signal saying "I am writing a slanted hit piece" the way they would if it came from any news organization in the country, but as one of good epistemic hygiene and honesty that would allow them to rationally and accurately update.

Judging by votes, people were somewhat receptive to Spencer and politely heard him out, but they did little to update based on his claims. Oliver's response, claiming that the lawsuit threat was an attempt at intimidation that justified immediate release of all information and that 40 more hours of lost productive time was unreasonable to ask, was overwhelmingly more popular—indeed, about as popular as a response gets in this ecosystem.

Geoffrey's reception was decidedly more mixed. The bulk of the community emphatically rejected Geoffrey's push to heed professional standards, with people claiming that in many cases those standards simply existed to protect the professionals, citing a general distrust for established codes of professions and for the standards of investigative journalism in particular, and claiming those standards set the bar too high for an already thankless task.

In addition, a plurality of the community who voted in @Nathan Young's poll agreed with the decision not to delay posting. 

It is well and good to distrust journalism. I do myself. I confess, though, that in all my time hearing how my spheres criticize journalists, I have never once heard people complain that they work too hard to verify their information, try too hard to be fair to the subjects of their writing, or place too high a premium on truth.

As Geoffrey points out, the crux is "how bad it is to make public, false, potentially damaging claims about people, and the standard of care/evidence required before making those claims."

I can't say this is a crux I expected among rationalists, but here we are.

Oliver claims that Ben's goal with the post was not to judge, but to publish evidence that had been circulating and allow for refutation. That is hard to square with lines like "I expect that if Nonlinear does more hiring in the EA ecosystem it is more-likely-than-not to chew up and spit out other bright-eyed young EAs who want to do good in the world," hard to square with Ben's repeated assertions that claims in his post were credible, and hard to square with the duty you take on by electing to publish an exposé about someone and telling people they can trust it due to the time you put into it and your stature within the community. You have to play the role of judge in a scenario like that.

It's worth examining the code of ethics for the Society of Professional Journalists. A respect for truth as their fundamental aim is written into their first, second, and third principles:

Ethical journalism should be accurate and fair. Journalists should be honest and courageous in gathering, reporting and interpreting information.

Journalists should:

  • Take responsibility for the accuracy of their work. Verify information before releasing it. Use original sources whenever possible.
  • Remember that neither speed nor format excuses inaccuracy.
  • Provide context. Take special care not to misrepresent or oversimplify in promoting, previewing or summarizing a story.

I believe this is a fair, reasonable, and minimal standard for anyone aiming to do investigative work. It is not sufficient to claim epistemic uncertainty when promoting falsehoods, nor is it sufficient to say you are simply amplifying the falsehoods of your sources.

When you amplify someone's claims, you take responsibility for those claims. When you amplify false claims where contradictory evidence is available to you and you decline to investigate that contradictory evidence, you take responsibility for that. People live and die on their reputations, and spreading falsehoods that damage someone's reputation is and should be seen as more than just a minor faux pas.

Ben, so far as I can tell, disputes this standard, holding instead that past a relatively low threshold, unverified allegations should be spread: "I think I'm more likely to say "Hey, I currently assign 25% to <very terrible accusation>" if I have that probability assigned, rather than wait until it's like 90% or something before saying my probability." His response to Nonlinear's rebuttal makes the reasonable-sounding statement that he plans to compare factual claims to those in his piece and update inaccuracies, but a high tolerance for spreading falsehoods is built into his process. Correction is the bare minimum of damage control after spreading damaging falsehoods, not prudence following a pattern of prudence.

Better Processes Are Both Possible and Necessary

Oliver explicitly disputes the journalistic standard. He asserts that the "approximate result of [the standard I ask] is that [they] would have never been able to publish." When I pushed back, he encouraged me "to talk to any investigative reporter with experience in the field and ask them whether [my] demands here are at all realistic for anyone working in the space."

I agree that they would never have been able to publish a list of unsubstantiated rumors, and consider that a good thing: to quote a friend, a healthy community does not spread rumors about every time someone felt mistreated. But I emphatically disagree that they would never have been able to publish anything at all. I would never think to hold them to a standard I do not hold myself to.

As reassurance, Oliver cites how their investigative efforts are a "vast and far outlier," both in the realm of willingness to pay sources[12] and "on the dimension of gathering contradicting evidence."[13]

He is technically correct: they are indeed an outlier. Just not, unfortunately, in the way he intends.

I am not a journalist. The only time in my life I have been paid to write, or indeed have sought payment for that writing, was in Scott's 2018 Adversarial Collaboration Contest. When I write, I do so in my spare time in quiet corners of the internet, often out of the motivation that only comes when Someone Is Wrong On The Internet and when by all rights I should be doing something else. Some of the topics I focus on read as bizarrely trivial on their face, a far cry from the world-saving work EAs prefer to focus on, as with my detailed account of the fall of r/antiwork and the backstory behind a viral moment of a pirate furry hitting someone with a megaphone. We all have our fascinations.

Consider that latter article. The "antagonists" were not particularly communicative, but I reached out to them multiple times, including right before publication, checking if I could ask questions and asking them to review my claims about them for accuracy. I went to the person closest to them who was informed on the situation and got as much information as I could from them. I spent hours talking with my primary sources, the victim and his boyfriend, and collecting as much hard evidence as possible. I spent a long time weighing which points were material and which would just serve to stir up and uncover old drama. Parties claimed I was making major material errors at several points during the process, and I dug into their claims as thoroughly as I could and asked for all available evidence to verify. Often, the disputes they claimed were material hinged on dissatisfaction with framing.

All sources were, mutually, worried about retribution and vitriol from the other parties involved.[14] All sources were part of the same niche subculture spaces, all had interacted many times over the past half-decade, mostly unhappily, and all had complicated, ugly backstories.

From my conclusion to that story:

The obscurity became its own justification. Little tragedies happen all the time and are forgotten by the broader world as quickly as they arise. [...] In the end, I pursued this story for a simple reason: nobody else would. If people are to become outcasts among outcasts, to have their names and faces forever tied to allegations of behavior and beliefs so heinous they justify ostracization and physical assault, the least they deserve is someone willing to tell their story.

I did this in my spare time, of my own initiative, while balancing a full law school schedule. I approached it with care, with seriousness, and with full understanding of the reputational effects I expected it to have and the evidence I had backing and justifying those effects. Writing about someone means taking on a duty to them, particularly if you write to condemn them.

There is no threshold for hours of engagement. The test is accuracy. If you are receiving or seem likely to receive new material facts that contradict elements of your narrative, you are not ready to publish.

I want to pause for a moment on this: I spend hours upon hours verifying obscure trivia in niche stories with minuscule real-world impact. This obsession is hardly a virtue, but the standards of truth-seeking I demand are not too onerous—not for a story about internet nonsense, and certainly not for a controversy that could change the course of lives.

My own credibility is limited by my amateur status and relative inexperience. I'm not an investigative reporter, much as I LARP as one online.[15] Since my job puts me in close proximity to journalists, though, Oliver and I worked together to write a hypothetical to pose to experienced ones, in line with his challenge to me, with our opposite expectations preregistered. I don't endorse the hypothetical as a fully accurate summary of what happened, but I agreed that it was close enough to get worthwhile answers.

The hypothetical we came up with:

Say you were advising someone on a story they'd been working on for six months, aimed at presenting an exposé of a group their sources were confident was doing harm. They'd contacted dozens of people, cross-checked stories, and done extensive independent research over the course of hundreds of hours.

Their sources, who will be anonymous but realistically identifiable in the article, express serious concerns about retribution and request a known-in-advance publication date.

They have talked to the group they are investigating multiple times to gather evidence, but have not informed them that they are planning to release an exposé with the evidence they gathered. Seven days before their scheduled publication date, they contact the group and inform them about their intent to publish and the key claims they are planning to include in their exposé.

The group claims that several points in their article are materially wrong and libelous and asks for another week to compile evidence to rebut those claims, growing increasingly frantic as the publication date approaches and escalating to a threat of a libel suit.

On the last day before publication, they show a draft to another person close to the story, who points out a detail that does not directly contradict anything in the post but seems indirectly implied to be false, which they correct in the final publication. Then, with two hours to go before the scheduled publication, the same contact provides evidence against one of the statements made in the post, though it also does not definitively disprove it.

Would you advise them to publish the article in its current form, or delay publication, despite the sources' credible concerns about retribution and the promise of the scheduled publication date?

I posed that hypothetical as written, with a brief, neutral leadup, to several journalists.[16] Ultimately, I received three answers, two from my bosses and one from Helen Lewis of The Atlantic. I understand if people would prefer to discount the answers from my bosses due to my working relationship with them, but I believe the framing and lack of context positioned all three well to consider the question in the abstract and on the merits independent of any connections. None were aware of the actual story in advance of answering, only the hypothetical as presented, and none of their answers should be taken as positions on the actual sequence of events.

First, from Katie Herzog, who formerly wrote for The Stranger and currently cohosts the podcast Blocked and Reported:

I would delay publication. I’m not sure about the specifics of libel law but putting myself in a publisher’s shoes, they do tend to not want to get sued and your first commitment, beyond getting the scoop or even stopping the hypothetical group from doing harm, should be towards accuracy.

Oliver requested I clarify that the concern is solely ethical responsibility, not lawsuits. When I asked whether it mattered, she responded:

[I]t doesn't, really. [A]ccuracy is paramount under threat of legal action or not.

Second, from Jesse Singal, formerly of NYMag with bylines in many outlets, author of The Quick Fix: Why Fad Psychology Can't Cure Our Social Ills, and cohost of Blocked and Reported:

I think it depends a lot on the group's ability to provide evidence the investigators' claims are wrong. In a situation like that I would really press them on the specifics. They should be able to provide evidence fairly quickly. You don't want a libel suit but you also don't want to let them indefinitely delay the publication of an article that will be damaging to them. It is a tricky situation! I am not sure an investigative reporter would be able to help much more simply because what you're providing is a pretty vague account, though I totally understand the reasons why that's necessary.

Finally, from The Atlantic's Helen Lewis, former deputy editor of the New Statesman and author of the book "Difficult Women: A History of Feminism in 11 Fights":

This feels like a good example of why you shouldn’t over-promise to your sources—you want a cordial relationship with them but you need boundaries too. I can definitely see a situation where you would agree to give a source a heads up once you’d decided to publish — if it was a story where they’d recounted a violent incident or sexual assault, or if they needed notice to stay somewhere else or watch out for hacking attempts. But I would be very wary of agreeing in advance when I would publish an investigation—it isn’t done until it’s done.

In the end the story is going out under your name, and you will face the legal and ethical consequences, so you can’t publish until you’re satisfied. If the sources are desperate to make the information public, they can make a statement on social media. Working with a journalist involves a trade-off: in exchange for total control, you get greater credibility, plausible deniability and institutional legal protection. If I wasn’t happy with a story against a ticking clock, I wouldn’t be pressured into publication. That’s a huge risk of libelling the subjects of the piece and trashing your professional reputation.

On the request for more time for right to reply, that’s a judgement call—is this a fair period for the allegations involved, or time wasting? It’s not unknown for journalists to put in a right to reply on serious allegations, and the subject ask for more time, and then try to get ahead of the story by breaking it themselves (by denying it).

You don't even have to look as far as my examples, though. To his credit, Oliver repeatedly asked for better examples of what to do in similar situations. To the credit of the rationalist community, it contains some of those examples. To Oliver's discredit, however, he was fully aware of one better example: his own response to allegations of community misconduct was among the subjects of that investigation.

Last year, a rationalist meetup organizer faced accusations of misconduct. Oliver and his wife Claire (who was in charge of meetup organization as a whole) banned him from an event; he objected, and Claire agreed to be bound by a community investigation. One principle used in that investigation is worth highlighting:

Anyone accused of misconduct should promptly be informed of any accusations made against them and given an opportunity to tell their side of the story, present evidence, and propose witnesses. Emergency preliminary actions should be taken where allegations are sufficiently serious and credible, but the accused should be given an opportunity to defend themselves as quickly as possible.[17]

In the end, the team writing the report highlighted several specific allegations against its primary subject before including a telling line:

We were unable to substantiate any other allegations made against [redacted]. At his request, we are not repeating unsubstantiated allegations in this document.

A prudent decision.

On Lawsuits

One of the strongest and most universal sentiments shared in response to Ben's post was that threatening a lawsuit was completely unacceptable. A notable example:

More confidently than anything on this list, Nonlinear's threatening to sue Lightcone for Ben's post is completely unacceptable, decreases my sympathy for them by about 98%, and strongly updates me in the direction that refusing to give in to their requested delay was the right decision. In my view, it is quite a strong update that the negative portrayal of Emerson Spartz in the OP is broadly correct. I don't think we as a community should tolerate this, and I applaud Lightcone for refusing to give in to such heavy-handed coercion.

I get the skepticism, but no matter how much you dislike defamation lawsuits, you should like actual defamation less.

Earlier, I linked to a comment emphasizing distrust in the established codes of professions in favor of another standard: "this group thought about this a lot and underwent a lot of trial by fire and came up with these specific guidelines, and I can articulate the costs and benefits of individual rules."

I am not a romantic about the law. It is an unwieldy, bloated beast that puts people through the wringer even when they win. The powerful can wield it against the weak. It is selectively enforced, in what feel at times like all the worst moments.

In common law countries, though, it is something else as well: the result of collective society thinking a lot, undergoing a lot of trial by fire, and coming up with specific guidelines to bring people as close as possible to being made whole again after they suffer injustices we have collectively deemed to be intolerable. The best judges understand precisely what the law is:

A case is just a dispute. The first thing you do is ask yourself—forget about the law—what is a sensible resolution of this dispute? The next thing ... is to see if a recent Supreme Court precedent or some other legal obstacle stood in the way of ruling in favor of that sensible resolution. And the answer is that's actually rarely the case.

The common law is, for the most part, pleasantly intuitive. I like to say it's all vibes. A great deal of common law hinges on the "reasonable person" standard, either explicitly or implicitly: is it sensible to do this? Good. Then do it. Is it unreasonable? Then don't.

The court of law is, in short and aspirationally, a last-ditch way to force people to right wrongs without escalating to force. Few disputes reach the point of lawsuits. Fewer still make it past discovery and into trials without settlements. Yet fewer see dueling parties fight bitterly up the chain of appeals. Throughout the cases I read as a first-semester law student, a message drilled in by judge after judge throughout history is that nobody wants to see the inside of a court. If you can handle wrongs in your life on your own, not even the judges want you there.

Threats of lawsuits are fundamentally different from other threats. They are, as @Nathan Young put it, bets that the other party is so wrong you're willing to expend both parties' time and money to demonstrate it. Rationalists are fond of Yudkowsky's line: "Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever." If it can be had nowhere else, the court is the way to get that counterargument, and I concur with @Daystar Eld that people should not be "shunned, demonized, etc for threatening to use a very core right that they're entitled to."

Making firm statements about the law when I am not a lawyer is perilous, and the legal paper I had to write outlining the ways lawyers can get sued for malpractice for casual false advice to friends is fresh in my mind. Still, my impression is that many here misunderstand libel law somewhat, and the actual standard is worth clarifying. I'll start with a comment from Oliver:

The original post is really quite careful in its epistemic status and in clearly referencing to sources claiming something. You could run this by a lawyer with experience in libel law, and I think they would conclude that a suit did not have much of a chance of success.

I will make no specific legal claims about the original post. Inasmuch as I am interested in the legal standard, it is primarily as a baseline for the ethical standard. It's worth examining, however, the standards of defamation law.


Referencing claims made by specific sources: 

Under Restatement (Second) of Torts § 578, a broadly but not universally accepted summation of common law torts, someone who repeats defamatory material from someone else is liable to the same extent as if they were the original publisher, even if they mention the name of the original source and state they do not believe the claim. Claims of belief or disbelief, while not determinative, come into play when determining damages.

Two Supreme Court cases, St. Amant v. Thompson, 390 U.S. 727 (1968) and Harte-Hanks Communications, Inc. v. Connaughton, 491 U.S. 657 (1989), showcase how people can be liable solely for repeating someone else's defamatory claims. In St. Amant, a politician who read his own questions and someone else's false answers in an interview was found not liable only because actual malice could not be proven. In Harte-Hanks, a newspaper was found liable for libel solely for quoting a witness who falsely claimed she was offered a bribe in exchange for favorable testimony.

Epistemic uncertainty: 

Restatement (Second) of Torts § 566 touches on expressions of opinion, clarifying that opinions are actionable to the extent they are based on express or implied defamatory factual claims.

Per Milkovich v. Lorain Journal Co., 497 U.S. 1 (1990), opinions that rest on factual claims (e.g. "In my opinion John Jones is a liar") can imply assertions of objective fact, and connotations that are susceptible to being proven true or false can still be considered. Opinions are not privileged in a way fundamentally distinct from facts.


In short, you do not dodge liability for defamation by attributing beliefs to your sources or by clarifying you don't know whether an accusation is true. 

Lawsuit threats are distinctly unfriendly. Here's another thing that's distinctly unfriendly: publishing libelous information likely to do irreparable damage to an organization without giving them the opportunity to proactively correct falsehoods. The legal system is a way of systematizing responses to that sort of unfriendliness. It is not kind, it is not pleasant, but it is a legitimate response to a calculated decision to inflict enormous reputational harm.

At the time Nonlinear threatened legal action, they honestly believed that they were about to be libeled and that they had hard material evidence that would be sufficient to prove that libel in a court of law. They may be correct, they may be incorrect, but at the time they made that threat they were already on trial, with Ben Pace as prosecutor and judge alike, and no defense attorney to be found.

A threat of legal action in a circumstance like that should serve not as a defection from a frame of cooperation, but as a reminder that you are already in a fundamentally adversarial frame, having chosen to investigate a group over a long period of time and then publish information to damage them. It should serve as a warning: not "get this information out immediately at all cost," but "If you cannot deescalate, someone will win here and someone will lose. Dot every i. Cross every t. Make your own behavior unimpeachable, because every action you take will be under strict scrutiny."

The adversarial frame began when Alice and Chloe started sharing rumors about Nonlinear, rumors others used to justify changing their behavior around the company's members without verifying the claims with them. It continued when Lightcone elected to spend six months digging up all possible negative information about them, when they reached out with a publication date already set, and when they refused to delay publication a moment to allow counter-evidence. At no stage can this be said to have been a collaborative process.

If your goal is to reveal the truth and not to inflict harm on someone, you should wait until you have all sides as thoroughly as you can reasonably get them, not cut that process short when the party you are making allegations against responds with understandable antagonism—until and unless they refuse to cooperate further and have no more useful information to give.

First Principles, Duty, and Harm

The EA/LW community loves to think from first principles, and that is usually one of its finest traits. I notice and respect the times their first-principles thinking leads them to be correct about things broader society is incorrect about—a regular occurrence. Occasionally, though, this manifests in a way satirized by SMBC and many others: confidence that they can outperform others from first principles leading them to make painfully predictable missteps in other fields.

It would be hypocritical of me to criticize the desire to do amateur investigative journalism, to be the one to show up and do things where others do not. Ben Pace, in defending his decision to write his article, used a quote from Eliezer Yudkowsky I am also fond of:

But if there’s one thing I’ve learned in life, it’s that the important things are accomplished not by those best suited to do them, or by those who ought to be responsible for doing them, but by whoever actually shows up.

When you say "I want to make the world a better place," though, you add an implicit "I want power and should be trusted with it." People should do good, say things worth saying, and get involved in causes that matter to them, but every time they do so, they enmesh themselves in a web of responsibilities. The assertion of power is neither trivial nor costless. I do more amateur investigative work than almost anyone else I know of, without formal training, often without pay, and without any stamp of approval from a profession, and Lightcone has and should have the same privilege. But responsibility must accompany it.

Ben felt a clear sense of responsibility to Alice and Chloe. He felt a responsibility, too, to the community of Effective Altruism. Both are admirable. Somewhere along the way, though, spurred by those responsibilities and the feeling that he had a duty to speak out, he stopped feeling that same sense of responsibility to Nonlinear.

One of the most unsavory critics of the rationalist community coined the meme of rationalists as quokkas: profoundly innocent and naïve souls who can't imagine you might deceive them. This describes a failure state of rationalism, I think, but certainly not the central case. He is rightly unpopular around here and I hesitate to give further life to his metaphor by extending it, but in seeing rationalists reinvent the pettiest and most destructive subculture drama I find everywhere else from first principles, all while working to be even-handed and earnest, I have thought of nothing so much as a quokka with a machine gun.

Ben's post, in all honesty, seems naïve: that if you just state you only looked for the negative, people will add it to a carefully balanced judgment rather than treat it as a complete picture; that if you share negative information about someone and the truth comes out later people will simply update and the original damage is undone; that uncertainty about whether someone has done an awful thing should be handled the same way as other public uncertainty—that you can, in short, write a hit piece full of unverified gossip and rumors, but Rational.

That is not flattering, it is not kind, but it is what I see in this saga: First-principles thinking without sufficient consideration towards harm, brushing aside the safeguards people have felt out over centuries of building the common law and codes of ethics. Pure harm, in a sense. Innocent, well-meaning, earnest harm. But harm nonetheless.

What of Nonlinear?

Effective Altruists wish to avoid adjudicating truth claims in court and believe they can and should do better in-house. Very well, but you would do well to adopt some choices from the courts in that process.

Lightcone elected to try Nonlinear in the court of public opinion, putting the question of their reputation to a jury of their peers. They did so by means of a post that was openly biased and contained a wide range of falsehoods for which they concede little, if any, fault. They offered no semblance of due process, providing a single three-hour phone call to respond to six months of work and declining to examine any further exculpatory evidence. Their post, embraced and accepted by their community, caused immense and irrevocable material harm to Nonlinear. The community had a chance to notice and proactively correct those flaws. It did not and indeed dismissed those who raised them. CEA noticed and endorsed the trial, having likewise deliberately neglected Nonlinear's side of the story.

From all of this, I find myself drawn to only one outcome: Declare a mistrial, likely at least by retracting the initial article with a public apology, the same as responsible journalists do after publishing sufficiently false articles. Was Nonlinear at fault in some of its interactions? Probably! Were they their own worst enemies in the way they responded? Certainly. Does it matter anymore? Not at all. The community mishandled this so badly and so comprehensively that inasmuch as Nonlinear made mistakes in their treatment of Chloe or Alice, for the purposes of the EA/LW community, the procedural defects have destroyed the case.

I know neither Ben nor Oliver but respect their roles in this community and think that they were acting with serious efforts to apply rationalist/EA principles, neither of which I claim the mantle of. I spent the bulk of this essay criticizing their approach in ways that necessarily come off as hostile and painful towards an investigation they poured their hearts into over the course of half a year, but I think the lack of community self-correction to that approach and the failure to heed the red flags raised by Spencer Greenberg, Geoffrey Miller, and others are an order of magnitude more serious than anything either of them did. Inasmuch as people should correct from this, I believe the community as a whole is at fault.

This is my first top-level post on the Effective Altruism forums and, surprisingly, my first on LessWrong as well. I am used to writing to adjacent communities and in my own sphere, not here. I have written at such length here, rather than elsewhere, because I fundamentally and deeply respect many of the discourse norms here. This saga damaged that respect—pretty badly, in some ways—and revealed what I believe to be deep-running structural flaws in this ecosystem, implicating many people I have long followed and respected, but if there is one thing I know and respect about the EA/LW community, it is that you engage seriously and carefully with criticism.

As a community, you go to great lengths to do good—more, certainly, than I can claim. You're human, though. Give each other some grace.

And hey, next time you need a hit piece written?

Leave it to the New York Times.

  1. ^
  2. ^

    A member of the CEA community health team tells me they "tend to write messages of support for people going through or trying to protect others going through hard things, without necessarily supporting all their methods." I think they in particular have been in a complex spot trying to navigate many competing demands and I sympathize with the difficulty.

  3. ^
  4. ^
  5. ^
  6. ^

    Benefits and pay just aren't 1:1 comparable. I've had a lot of experience living in similar situations. During my early time in the Air Force, living and training expenses were covered in full and I was paid some $2200 a month (pay is public if you'd like more details). This was a great situation for me and I was able to save some 90% of my salary while living comfortably and happily. Later on, though, I got to choose my housing and food and got housing and food stipends added to my salary. I chose cheaper housing and cheaper food and saved much more money as a result.

    Someone wanting to describe my military compensation could do so in several ways:

    1. Raw salary while I got no housing/food allowance, then salary + allowances afterwards. This would be the answer in terms of pure income.
    2. Salary + equivalent value of allowances, both at the start and later. This would have relatively overstated my compensation early on compared to the first option, since I got more money in my pocket without a decline in subjective quality of life when I got money instead of housing and food.
    3. Salary + allowances + benefits (e.g. free health and dental, later the GI Bill, travel). This is an honest account of true compensation, probably the "truest" number I could choose, but it overstates the cash value of every benefit.
    4. My cost to the military. This would be astronomically higher than my compensation given the cost of my training and upkeep. Thinking too much about this number unsettles me.

    Nonlinear, it seems, is choosing somewhere between 3 and 4 to describe compensation. Having employees is expensive, more so when you want them to travel with you. Not all costs to you are reflected in their take-home pay. Military enlistment is not traditionally considered a high-paying career, but an E-1 fresh out of high school makes more take-home pay than Chloe did. That said, claims about military pay aside, I felt my own compensation was extraordinarily generous at every stage of my time in the Air Force. 

    My Mormon mission provides another basis of comparison. At the time I served, every two-year missionary paid $10,000 for the experience. From there, every cost was fully covered by the mission, with a small (few hundred dollar) stipend for food and incidentals that we still conceptualized as "the Lord's money." Costs to the LDS church vary wildly by mission location, but it would be odd to describe those costs as compensation at all. I did not and do not consider this structure abusive. Though I left Mormonism afterwards, my mission was the key formative experience of my life, with some of the worst and best experiences I've had and exposure to a slice of the world I had no other way to experience. 

    I think Nonlinear should have avoided putting a value estimate on benefits since that anchors expectations in an unproductive way, instead simply describing the benefits and letting people work it out for themselves.

  7. ^
  8. ^
  9. ^

    I include this for completeness, but those familiar with the story are probably most familiar with this claim, since Kat posted screenshots demonstrating this in reply to the original article.

  10. ^
  11. ^
  12. ^

    Paying sources, or checkbook journalism, is typically reserved for tabloids and paparazzi in the United States. Most mainstream papers ban it out of concern about introducing conflicts of interest, reducing the journalist's ability to remain objective, and undermining credibility of information. More outlets in Europe follow a cultural norm of being willing to pay, but it is not stinginess that causes most American outlets to shy away from paying sources. 

  13. ^

    I confess I find his position paradoxical: on the one hand, they put more effort and care in than others; on the other, the standard used by professional journalists is too onerous.

  14. ^

    Fears of retribution are the baseline norm for anybody sharing negative information about anybody else with an eye towards broad publication. There are few more common fears to hear from sources.

  15. ^

    It would, however, probably take substantially less than $800k a year to persuade me to become one.

  16. ^

    The full text of the messages I sent, with hypo text truncated:

  17. ^

    I had a long and somewhat confusing conversation with Oliver over whether the panel members endorsed this paragraph, with him claiming they may have either changed their mind about the paragraph or would not believe it applied to the Nonlinear situation, based on private conversations he'd had with them. The panelist I discussed things with stands by everything in the report.


Thanks for writing this! I'd been putting something together, but this is much more thorough.

Here are the parts of my draft that I think still add something:

I'm interested in two overlapping questions:

  1. Should Ben have delayed to evaluate NL's evidence?
  2. Was Nonlinear wrong to threaten to sue?

While I've previously advocated giving friendly organizations a chance to review criticism and prepare a response in advance, primarily as a question of politeness, that's not the issue here. As I commented on the original post, the norm I've been pushing is only intended for cases where you have a neutral or better relationship with the organization, and not situations like this one where there are allegations of mistreatment or you don't trust them to behave cooperatively. The question here instead is, how do you ensure the accusations you're signal-boosting are true?

Here's my understanding of the timeline of 'adversarial' fact checking before publication: timeline. Three key bits:

  • LC first shared the overview of claims 3d before posting.
  • LC first shared the draft 21hr before posting, which included additional accusations
  • NL responded to both by asking for a week to gather evidence that they claimed [...]

US defamation law is not strict at all. It bends over backwards to respect the First Amendment. Truth is a complete defense. UK defamation law, and the law of many other un-free countries, is pretty ridiculous and I do think it's unethical to take advantage of that. But in the US it's so hard to bring a defamation suit that even bringing one and not being shot down by the anti-SLAPP law is strong evidence that the plaintiff is in the right.


 I haven't reviewed it closely enough to know if there's enough for a viable defamation suit, but if there is, NL absolutely should sue Pace and those getting mad about that threat should be ashamed of themselves. A norm that you should not bring, or threaten to bring, lawsuits against people who are wronging you in a way the law recognizes as wrong is very very very bad. That norm privileges those who are doing wrong, at the expense of the innocent. 

Thank you for saying this.

I’ve noticed that among those who most strongly condemn the idea of bringing a defamation lawsuit, almost all also assume that Lightcone would win the suit. I have seen nobody make the case that this is a slam-dunk defamation case but that Nonlinear should still never consider pursuing it on principle.

I believe that Nonlinear would win, and that actually doing so as of now would be mildly wrong.

It’s worth distinguishing between the threat they made and bringing the actual lawsuit; in this comment you talk about the lawsuit, but in your clarification below you talk about the threat of one. Even if I lay aside the obvious justification for the threat and only consider the possible harms, they’re so insignificant that I don’t think they’re worth considering; I feel like the threat was well justified.

I can definitely be persuaded of this, and that’s in line with the conclusion I was aiming towards in my last section: lawsuits are a last resort and a sign of an embarrassing failure to resolve disputes any other way. The EA community prides itself on having better-than-median approaches to these things, so it can and should find a satisfactory resolution that does not involve an actual lawsuit.

Presumably you're looking at some sort of arbitration? Here's the challenge I see: in general, the EA community seems very hesitant to bring outsiders into this sort of thing. However, it may often be difficult to find insiders who would be accepted by all as truly neutral, who also have the necessary skillset and bandwidth.

This is a favorable situation for arbitration from a structural point of view -- Lightcone and Nonlinear seem to be roughly the same weight, by which I mean that their ability to present their case to an arbitrator seems roughly equal and that neither seems vastly more powerful in a way that risks an arbitrator deferring to them. In addition, both are probably in a position to pay for the costs of arbitration if they lose. Under those circumstances, some sort of private dispute resolution is viable in a way that it wouldn't be if (e.g.) Open Phil was one of the disputants.

I maintain, of course, that the cleanest and best resolution would be for LC to back off of trying to litigate every specific claim to its maximum capacity and instead acknowledge the ways that going on a hunt only for negative information, then refusing to pause to consider exculpatory evidence (including a point they agree was exculpatory on an accusation they agree was significant), poisoned the well in the dispute as a whole—even if they continue to believe they were more-or-less correct about NL.

As things stand, it looks almost inevitable that Ben’s next post will focus primarily on relitigating specific claims and aiming to prove he really was right about NL. Oliver has, however, indicated at least some inclination towards the idea that if that does not persuade people, they will be more open to considering the procedural points I raise here. I believe him in that and am broadly optimistic that what I anticipate will be a lukewarm response to that further litigation will open the door to a clean resolution.

Failing a broadly community-satisfactory outcome from that, it does seem like an ideal case for arbitration or something that fills the same role, yes.

This is helpful; thanks. It brings up something I have been internally musing about (not specifically about your post or comments) --

For this exercise, let's (roughly) condense the criticism of Lightcone into "They have acted like a partisan advocate, rather than a neutral truthseeker." I can think of three ways to go from there, which have a good bit of overlap:

  • "We expect everyone to be a neutral truthseeker here; partisan advocacy is against our norms."
  • "We accept some degree of advocacy from both sides here, but Lightcone went way over the line of permissible advocacy here."
  • "A core problem is that Lightcone presented itself as a neutral truthseeker conducting an 'investigation,' when in fact its actions were that of a partisan advocate."

As someone who is focused more on setting norms for the future than on arbitrating Lightcone's specific conduct per se, it might be helpful for future norm-building if Lightcone's critics are clear about the extent to which each of these three pathways explains how they believe that Lightcone went astray. My guess is that it is some combination of number 2 and number 3 for most people, but I don't actually know that.

I recognize that there is all kinds of relevant context, and I really don't mean this as some kind of cheap "gotcha" or whatever, and think there are a bunch of ways in which this is reasonable, but Nonlinear did sound like they were claiming that this was a slam-dunk case in their email to us:  I do think this mattered quite a bit for at least my reaction (though it's more that Ben's reaction here mattered).  Again, I do think there are ways in which saying this kind of thing is understandable, especially under time pressure, and also I am not an expert in libel law, but Nonlinear claiming that the legal case was unambiguous, despite me being reasonably confident that it wasn't, was one of the things that made me interpret this as more of a bluff and an intimidation tactic than a serious attempt to fairly use the legal tools available to achieve a just outcome.

I’m not sure how this would contradict my point. You don’t think the threat was reasonable because you don’t think the case was a slam dunk. If you thought they actually had a slam dunk case against you and were virtually guaranteed to win via summary judgment, do you think a libel suit would be a reasonable threat?

If they actually had a slam dunk case, I would react somewhat differently, though still perceive a libel suit in that context as a very aggressive thing to do.  If Nonlinear had accurately represented their chances of winning, then I would have perceived it as less of an intimidation attempt (like, if the Nonlinear email said "we aren't confident we would win a libel suit, but given the stakes for us we have no choice", I do think that would have caused me to perceive the email pretty differently). Relatedly, if I thought that the case was obviously a slam dunk, and they had said so, that also would have felt less like intimidation to me. It would have still been a kind of risky threat, but as @Jason pointed out in another comment on this post, one of the most pernicious problems with libel suits is invoking the threat of them, without actually ever having to pay up on the cost of going through with them, and overstating your chances of success is correlated with that. I do also think that a lawsuit that was very likely to succeed would be correlated with having more of an ethical case (not strongly, but at least somewhat).
Do you mean “would win” or “would lose” the suit? If the former, the two sentences seem contradictory?
What do you mean? People who think Lightcone would win (i.e., “no libel”) tend to treat the suit as a threat simply to waste their time and money. I haven’t seen many people who strongly think Nonlinear would win (i.e., “libel”) but that threatening a (winning) suit would be wrong of them.
Oh I see, I misread the proper nouns

I think you may be assuming in part that the plaintiff is at least a limited purpose public figure and would have to prove actual malice rather than mere negligence. One has to show negligence to collect on a lot of torts. Yes, there are extra/early screening mechanisms because of abusive use of defamation suits, but those exist in other areas too -- like medical malpractice, qualified immunity, etc.

The flipside is that US judgments can be truly eye-popping. I have no love lost for Giuliani and even less for Alex Jones, but it's hard for me to accept that compensatory damages for people telling obvious lies about someone should approach an order of magnitude higher than they would likely be for tortiously killing the person (e.g., by speeding). So lower risk than most other places, but the liability if you lose can be devastating.

Are Nonlinear and its employees LPPFs here? I'm not opining beyond noting that is non-obvious to me. One could view this more as a dispute over, e.g., alleged non-provision of vegan food to staff, which doesn't strike me as a matter of public concern.

My statement stands even on a negligence standard. It's even harder to sue as a public figure, but truth is an absolute defense regardless.

Jeff Kaufman
This is a weaker defense than it sounds: a statement can be true while also not turning out to be something you can convince a court is most likely to be true.
Vilfredo's Ghost
Unlikely, but to the extent it's true it mostly favors the defendant. Burden of proof is on the plaintiff.
I disagree that they should necessarily sue if they can win. NL suing would cause further controversy and damage to their reputation. Lawsuits should be a weapon of last resort; in this case, it remains plausible that either Lightcone will eventually apologize, or that NL can win over the community. (Arguably they are in the process of doing so?) A lawsuit is a negative-sum game for the EA community, due to the substantial lawyer fees; depending on the damages, it could be financially negative even for the winner. In the event of a successful lawsuit, I believe we should think very mildly poorly of NL, and extremely poorly of Lightcone.
Vilfredo's Ghost
In the event of a successful lawsuit, we should consider NL fully vindicated and not engage in this sort of reputational retribution for daring to defend their rights. Actions that successfully punish wrongdoers are generally not negative sum because they discourage future misconduct. This is true even where it's negligent and not malicious; knowing one may face consequences encourages greater care in the future.

Edit: I can see reasons it might be unfair to pursue a defamation suit against an unsophisticated/under-resourced party where it's a really close call legally. But we are talking about legally sophisticated parties who effectively spent six figures' worth of their time on this; the legal fees are chump change compared to what they've already put in.
I suspect that Lightcone has already been deterred. Even if they haven’t, we should prefer/pursue punishments that don’t involve setting a bunch of money on fire to pay lawyers, with a lawsuit as the last resort; we are not yet at that last resort, and probably won’t ever end up there.

Deterrence isn't merely about Lightcone being deterred from future action, but also about other parties that are considering saying potentially defamatory things regarding others. If they can see that past defamatory statements carried legal consequences, they may be more inclined to exercise greater care; thus harm from future defamatory statements could be avoided.

You touched on something here that I am coming to see as the key issue: whether there should be a justice system within the EA/Rationality community and whether Lightcone can self-appoint into the role of community police. In conversations with people from Lightcone re: NL posts, I was told that it is wrong to try to guard your reputation because that information belongs to the community to decide. US law on reputation is that you do have a right to protect yourself from lies and misrepresentation. Emerson talking about suing for libel-- his right-- was seen as defection from the norms which that Lightcone employee thinks should apply to the whole EA/rationality community. When did Emerson opt into following these norms, being judged by these norms? Did any of us? The Lightcone employees also did not like that Kat made a veiled threat to either Chloe or Alice (can't remember) that her reputation in EA could be ruined by NL if she kept saying bad things about them. They saw that as bad not just because it was a threat but because it conspired to hide information from the community. From what I understood, that Lightcone employee thought it would have been okay for Kat to talk shit about...

You touched on something here that I am coming to see as the key issue: whether there should be a justice system within the EA/Rationality community and whether Lightcone can self-appoint into the role of community police. 

Pretty much every community has norms and means of enforcing those norms ("social control" to the sociologists). Those means may be more or less formal, but I don't think communities are very viable without some means of norm enforcement. I think "justice system" implies something significantly different than what has happened here: e.g., the US justice system can throw me in a dungeon and take away all my money. To use a private example, if I were Catholic, the Catholic justice system could excommunicate me, defrock me as a priest, etc. A campus justice system can expel or fire me.

What happened here feels more like gossip on steroids. Lightcone said bad things about Nonlinear, which had the effect of decreasing community opinion of Nonlinear. That might in turn have concrete adverse effects on Nonlinear. But as far as I know: Lightcone did not, and could not, directly impose consequences on Nonlinear unmediated by the actions of the community.

Likewise, I do...

What you’re missing is that Lightcone is not just another citizen. They control a lot of money and influence. If Ben and Oli were just regular citizens these criticisms wouldn’t carry undue weight. If Alice and Chloe had published their experiences themselves, I think people would have interpreted them more in proportion (and they would have been exposed to way more risk), which would have been a lot closer to the system you’re talking about.

It is your right (and mine, and everyone else's) to decide not to associate with Lightcone, Nonlinear, both, or neither based on your assessment of their various actions.

I don’t know you, but it sounds like you don’t live on EA/Rat grants. If you did, you would know it’s way more advantageous to side with Lightcone. Many would feel they could not afford not to. (Full disclosure: I have a Lightspeed grant, and obviously I feel okay criticizing Lightcone, but I might hesitate more if they were my only funding source.)

This is an excellent comment.

Concretely, I sometimes hear organization leaders say that they choose to have their organization not be "EA" because doing so opens them to criticism from random people on the EA Forum, and this doesn't occur if they just describe themselves as working on "alternative proteins" or whatever.

(Although in this particular case, it's not clear to me that Ben Pace wouldn't have chosen to investigate Nonlinear if they didn't self describe as "EA". It seems like his investigation was triggered by them using Lightcone.)

I feel like we should also be discussing FTX here. My model of the Lightcone folks is something like:

  1. They kinda knew SBF was sketchy.

  2. They didn't do anything because of diffusion of responsibility (and maybe also fear of reputation warring).

  3. FTX fraud was uncovered.

  4. They resolved to not let diffusion of responsibility/fear of reputation warring stop them from sharing sketchiness info in the future.

If you grant that the Community Health Team is too weak to police the community (they didn't catch SBF), and also that a stronger institution may never emerge (the FTX incident was insufficient to trigger the creation of a stronger institution, so it's hard to imagine what event would be sufficient), there's the question of what "stopgap norms" to have in place until a stronger institution hypothetically emerges.

Even if you think Lightcone misfired here -- If you add FTX in your dataset too, then the "see something? say something!" norm starts looking better overall.

With regard to explicit agreements: One could also argue from the other direction. No one in EA explicitly agreed to safeguard the reputation of other EAs. You say: "If individuals want to give a company a bad r...

I have very little inside perspective on SBF, but my general take on FTX is that there was not enough shady info known outside of the org to stop the fraud. (What’s the mechanism? Unless you knew about the fraud, idk how just saying what you knew could have caused him to change his ways or lose control of his company.) It’s possible EA/rationality might have relied less on SBF if more were known, but you have to consider the harm of a norm of sharing morally-loaded rumors as well.

The risk of a witch hunt environment seems worse to me than the value of giving people tidbits of info that a perfect Bayesian could update on in the correct proportion but which will have negative higher-order effects on any real community that hears it.

Ebenezer Dukakis
Habryka seems to think there was significant underreaction to shady info: https://forum.effectivealtruism.org/posts/b83Zkz4amoaQC5Hpd/time-article-discussion-effective-altruist-leaders-were?commentId=nGxkHbrikGeTxrLjZ

I think you have to balance cost of false negatives against cost of false positives.

Asking out of ignorance here, as I was only exposed to the general news version and not EA perspectives about FTX. What difference would it have made if FTX fraud was uncovered before things crashed? Is it really that straightforward to conclude that most of the harm done would have been preventable?

I think the claim is not that fraud would have been uncovered, but rather that rumors about SBF acting deceptively would have been shared. (See e.g. this post as an example of what might have been shared.)

Even if you think Lightcone misfired here -- If you add FTX in your dataset too, then the "see something? say something!" norm starts looking better overall.

No, I don't think it does. You also need to assume that a "see something? say something!" rumor mill would have actually had any benefit for the FTX situation. I'm pretty sure that's false, and I think it's pretty plausible it would be harmful.

(1) The fraud wouldn't have become publicly known under this norm, so I don't think this actually helps.

(2) I don't think it would be correct for EA to react strongly in response to the rumors about SBF -- there are similar rumors or conflicts around a very substantial number of famous people, e.g. Zuckerberg vs. the Winklevoss Twins.

(3) Most importantly, how we get from "see something? say something?" to "the billionaire sending money to everybody, who has a professional PR firm, somehow ends up losing out" is just a gigantic question mark here. To me, the outcome here is that SBF now has a mandate to drive anybody he can dig up or manufacture dirt on out of EA. (I seem to recall that the sources of the rumors about him went to another failed crypto hedge fund that got sued; I can't find a source, but even if that didn't actually happen it would be easy for him to make that happen to Lantern Ventures.) (I expect that the proposed "EA investigative journalist" would have probably been directly paid by SBF in this scenario.)

Ebenezer Dukakis
If EA disavowed SBF, he wouldn't have been able to use EA to launder his reputation. In this case it would've been correct, because the rumors were pointing at something real. We know that with the benefit of hindsight. One has to weigh false positives against false negatives. I'm not saying rumors alone are enough for a disavowal, I'm saying rumors can be enough to trigger investigation. I think a war between SBF and EA would have been good for FTX users -- the sooner things come to a head, the fewer depositors lose all their assets. It also would've been good for EA in the long run, since it would be more clear to the public that fraud isn't what we're about. Your point about conflict of interest for investigative journalists is a good one. Maybe we should fund them anonymously so they don't know which side their bread is buttered on. Maybe the ideal person is a freelancer who's confident they can find other gigs if their relationship with EA breaks down.
To be clear, what I'm saying is that SBF would just flat out win, and really easily too, I wouldn't expect a war. The people who had criticized him would be driven out of EA on various grounds; I wouldn't expect EA as a whole to end up fighting SBF; I would expect SBF would probably end up with more control over EA than he had in real life, because he'd be able to purge his critics on various grounds.

I don't think that's enough; you'd need to not only fund some investigators anonymously, you'd also need to (a) have good control over selecting the investigators, and (b) ban anybody from paying or influencing investigators non-anonymously, which seems unenforceable. (Also, in real life, I think the investigators would eventually have just assumed that they were being paid by SBF or by Dustin Moskovitz.)

To be clear, what I'm saying is that SBF would just flat out win, and really easily too, I wouldn't expect a war. The people who had criticized him would be driven out of EA on various grounds; I wouldn't expect EA as a whole to end up fighting SBF; I would expect SBF would probably end up with more control over EA than he had in real life, because he'd be able to purge his critics on various grounds.

What would it take for EA to become the kind of movement where SBF would've lost?

I don't think that's enough; you'd need to not only fund some investigators anonymously, you'd also need to (a) have good control over selecting the investigators, and (b) ban anybody from paying or influencing investigators non-anonymously, which seems unenforceable. (Also, in real life, I think the investigators would eventually have just assumed that they were being paid by SBF or by Dustin Moskovitz.)

I agree that the ideal proposal would have answers here. However, this is also starting to sound like a proof that there's no such thing as a clean judicial system, quality investigative journalism, honest scientific research into commercial products like drugs, etc. Remember, it's looking like S...

I sorta feel like this is barking up the wrong tree, because: (a) the information that SBF was committing fraud was private and I cannot think of a realistic scenario where it would have become public, and (b) even if widely spread, the public information wouldn't have been sufficient. Before FTX's fall, I'd remarked to several people that EA's association with crypto (compare e.g. Ben Delo) was almost certainly bad for us, as it's overrun with scams and fraud. At the time, I'd been thinking non-FTX scams affecting FTX or its customers, not FTX itself being fraudulent; but I do feel like the right way to prevent all this would have been to refuse any association between EA and crypto.

Good point! I'm probably being overly skeptical here, on reflection.
I think @chinscratch may have meant: What would it take for EA to become the kind of movement where SBF would've lost in his hypothetical efforts to squelch discussion of his general shadiness, and run those folks out of EA? EA couldn't have detected or stopped the fraud in my opinion, but more awareness of shady behavior could have caused people to distance themselves from SBF, not make major decisions in reliance on FTX cash, etc.
nit: I don't know what Ben values his time at, though my guess is it's generally not $800k/yr.

Ben just said that he would consider doing this kind of work for $800k/year. This kind of work is really quite stressful, so it likely comes at a premium compared to other kinds of work, and might be more expensive than how much Ben otherwise values his time; I am confident it would for the great majority of people. My guess is most people would charge substantially more money to take on a dangerous job, or one they really don't enjoy, or one that involves a lot of pain and stress, than they usually would.

(Separately, I don't currently know of investigative journalists you can hire this way. Hiring an investigative journalist for a bunch of EA stuff was one of the primary things I was arguing for at the most recent EA Coordination Forum. I think it's a great idea, but it's not a great stopgap norm because it's genuinely quite hard to hire for, or at least I don't super feel capable of doing it. Financially, I would be willing to contribute quite a lot of money from a mixture of Lightcone, grantmaking, and personal funds.)

Great journalists are getting laid off all the time these days. You could find any number of professional and highly accomplished journalists for a tiny fraction of $800k per year. 

If you have any references for good ones, please send them to me! I think this kind of job is quite hard and many (my guess is most) journalists would not live up to a standard that I think would be acceptable in this kind of job, but I do think there are some, and I would love to talk to them about this.
Stuart Buck
I don't have personal references, but a ton of great journalists have gotten laid off in 2023, and they never were paid that much in the first place (not being TV or celebrity journalists).

https://www.poynter.org/business-work/2023/buzzfeed-news-closed-180-staffers-laid-off/
https://www.sfgate.com/tech/article/wired-layoffs-conde-nast-magazine-18550381.php
https://www.washingtonpost.com/style/media/2023/10/10/washington-post-staff-buyouts/
These are two different questions! EA already has a justice system of sorts -- the CEA Community Health Team. Ben chose to do this because he thought it was ineffective. The second question should instead be whether Lightcone can self-appoint themselves as a replacement for the CHT. The fact that somebody thought the CHT was ineffective and tried to replace it, then immediately faceplanted, makes me more confident in the actually existing CHT. (In particular, if Alice did indeed lie to Ben, it's then pretty likely that she said she didn't trust the CHT/didn't want info shared with them because they would fact-check her claims.)

But CEA CHT doesn’t cancel people. They just answer questions about people in the most general way possible if you ask and maybe ban them from CEA-sponsored programs and events like EAGs. No coincidence that Julia Wise set it up and is a social worker by training.

(Anyone who knows more, please feel free to correct/elaborate on what CHT does.)


Thank you for putting so much effort into helping with this community issue. 

What do you think community members should do in situations similar to what Ben and Oliver believed themselves to be in: where a community member believes that some group is causing a lot of harm to the community, and it is important to raise awareness?

Should they do a similar investigation, but better or more fairly? Should they hire a professional? Should we elect a group (e.g., the CEA community health team (or similar)) to do these sorts of investigation? 

All of those are reasonable options. The money Ben paid to sources would go a long way towards hiring a professional—it's almost as much as I make in my (part-time) journalism-adjacent work in a year.

Like I say, I'm not averse to citizen journalism and it would be incredibly hypocritical of me if I was. There's a lot amateurs can do in this sort of thing, but I think it requires willingness to act as something other than prosecutor—or, if you see yourself only able to act as prosecutor, to provide your evidence to a neutral third party who can get the other side of things.

Electing a group puts a lot of pressure on that group, but it's approximately what any large enough organization seems to do to handle misconduct allegations and has the advantage of allowing clear and predictable structure.

Most people in Ben's shoes won't have access to resources like those of Lightcone. So I think that is more a response to a narrower version of Peter's question (what someone similarly situated to Ben and Oli should do in an analogous situation) than it is to the broader reading of the question.

It's not clear to me how much professional time would have been needed for a full and fair investigation of this matter; do you have a sense of what the going rate for work like this would be?

I don't, no. Obviously it could stretch almost arbitrarily high, but I expect there are competent people who would tackle it for $50-$100/hour, for example. It's also pretty common for journalists who take interest in a topic to pursue it of their own initiative, but they have their own incentives and work (indeed, must work) towards different goals than the people who tip them off to things.

The broader reading is hard to answer because people vary wildly in their skills, interests, and resources. If I were advising myself, for example, I would say "Start poking around and see what pops up," because that's what I've always done. If I were advising someone with lots of money but little time, I would say "Spend your money to find someone with more time," because that seems like a pretty universal problem-solver. If I were advising someone with a large public presence, I would say "Call in a few favors from people you know," because reputation is power.

Let's go for the purest case, though: a random community member with little experience, money, or presence. The most important thing is finding somebody credible who has those. You can try tipping leads to trustworthy journalists (yeah ...

I have many other investigations I would like to see happen, so if there is a way to buy this as a service, in a way that will produce decent investigations, then I would very gladly pay substantial sums of my personal money for this. 

We did go around and ask many other people whether they would be willing to take on this investigation, all of whom declined, but we didn't explore the option of paying a large sum directly for this, and I at least don't have any contacts for people I expect would be willing to do this.

If you have any references to people who could be paid to do this kind of work in a high-integrity way, I would really love to be put in touch with them.

Good question. Let me think on it.

The answer I’m tempted towards is, of course, “Me”—for things that can be done remotely, at least. It’s something I’ve done professionally only in the context of podcast production and I don’t want to claim extraordinary experience or overstate my qualifications, but I do think I have a few unusual traits that make me well-suited for that sort of thing and it’s a process I take seriously.

Part of why I hesitate there is that I can’t do anything near-future (next five months or so) due to my commitments and don’t want to throw my hat in a ring where I’m too busy to ever do anything, but it’s a general style of work I enjoy and am open to.

I prefer to do more than idly self-promote, though—I’ll consider the question further.

Linda Linsefors
I think paying a friendly outsider would be the best option. I don't expect I have much say in this, since I don't have much spare money, so I will not be the one hiring. But I would like TracingWoodgrains to look into the Nonlinear story.

Insightful and well-argued post!

  • I found the hypothetical about NYT and CEA helpful for reasoning from first principles about acceptable journalistic practice. I came out of it empathizing more with Nonlinear's feelings before and during the publication of Ben Pace's article than I previously had.
  • Regarding Ben Pace's explicit seeking of negative information and unwillingness to delay posting, you updated me from thinking of these as simple mistakes to now considering them egregiously bad.
  • Great point that an article author can't just state their disclaimers at the top and expect readers to rationally recalibrate themselves and ignore the vibes of the evidence's presentation.

I found it hard to update throughout this story because the presentation of evidence from both parties was (understandably) biased. As you pointed out, "Sharing Information About Nonlinear" presented sometimes true claims in a way which makes the reader unsympathetic to Nonlinear. Nonlinear's response presented compelling rebuttals in a way which was calculated to increase the reader's sympathy for Nonlinear. Both articles intentionally mix the evidence and the vibes in a way which makes it difficult for readers to separate the two. (I don't blame Nonlinear's response for this as much, since it was tit for tat.)

Thanks again for putting so much time and effort into this, and I'm excited to see what you write next.


I'll just quickly say that my experience of this saga was more like this: 

Before BP post: NL are a sort of atypical, low structure EA group, doing entrepreneurial and coordination focused work that I think is probably positive impact.
After BP post: NL are actually pretty exploitative and probably net negative overall. I'll wait to hear their response, but I doubt it will change my mind very much.
After NL post: NL are probably not exploitative. They made some big mistakes (and had bad luck) with some risks they took in hiring and working unconventionally. I think they are probably still likely to have a positive impact on expectation. I think that they have been treated harshly.
After this post: I update to be feeling more confident that this wasn't a fair way to judge NL and that these sorts of posts/investigations shouldn't be a community norm. 

Threats of lawsuits are fundamentally different to other threats. They are, as @Nathan Young put it, bets that the other party is so wrong you're willing to expend both of your time and money to demonstrate it. 

I don't think that's quite right. Threats of lawsuits are extremely cheap -- it takes ten seconds max to type "I'll sue you!" They are also commonly made without a reasonable basis in law or fact, and without any intent to actually follow through. They are often used to inappropriately silence speech and truthseeking. 

Hiring a lawyer and filing a complaint gets closer to a bet, as you are putting some significant money and time on the line. However, people definitely file defamation suits as a form of PR management without any real intent to see them to trial, expecting that observers will see the filing as a marker of earnestness and/or a signal to withhold judgment until legal proceedings are complete. By the time the case is withdrawn or settled on confidential terms, public interest has moved on and nobody pays much attention. [1]

So evaluating whether it was appropriate to threaten litigation requires an assessment of whether the threatening party would... (read more)

I'd like to re-iterate that we did not say we would sue if they published. We said we would sue if they didn't give us time to share the evidence with them.

They were not supporting discourse. They were trying to avoid discourse. 

Ben explicitly said he'd updated about Alice's reliability based on our conversation and evidence we showed in the call. 

He knew he was about to receive even more evidence that would mean he'd wasted $10,000 and his last 6 months listening to a person who has a reputation for saying false and misleading things. 

He just didn't want to see that evidence. And he did masterful frame control by trying to make it seem like if we did share this evidence, that was us being "retaliatory" or somehow unethical. 

He then went around saying we'd threatened to sue if he published (link to one comment of many where they do this), when we couldn't have made it more clear that that wasn't the case. 

Excerpt from email we sent to Ben. Bolding in the original. 

This sort of suing threat wouldn't chill the discourse and make it so people wouldn't post valid criticisms of an EA org. EA is all about publishing ri... (read more)

Thanks, Kat. To be clear, I am not expressing an opinion about whether it was appropriate to make a litigation threat in the circumstances in which Nonlinear found itself. That's in part because I don't actually have an opinion on that point -- reaching an opinion I'd feel comfortable asserting would require hours of poring over posts and comments with that question in mind, which doesn't seem a good use of my time.

My concern was that TracingWoodgrains' analysis about "Threats of lawsuits" generally -- the plural is in the original -- appeared to conflate threatening to sue and actually seeing a case to trial. The latter is a costly signal; the former is not. I think it would be unfortunate if the community came to view threats to sue as a credible signal of correctness; that would incentivize making many more of those threats in circumstances where it was not appropriate to do so.

This is a fair distinction, but it does make me want to toy with a peculiar thought. As things stand in this community, threats to sue are themselves a costly signal of something even independent of proceeding to trial, because the counterparty can publicize them and get a lot of support from people furious that someone is breaking a norm against such threats.

He then went around saying we'd threatened to sue if he published (link to one comment of many where they do this), when we couldn't have made it more clear that that wasn't the case. 

Excerpt from email we sent to Ben. Bolding in the original. 

But then, from the next paragraph of that same email:

...if published as is we intend to pursue legal action for libel against Ben Pace personally and Lightcone for the maximum damages permitted by law.

It seems to me that you and Emerson are trying to have it two ways.  On the one hand, the email clearly says that you only wanted time.  On the other hand, the email also clearly says that if Ben gave you that time and then didn't respond the way you wanted, you were still going to sue him.  "we'd threatened to sue if he published" is a much more accurate summary of that email than "We said we would sue if they didn't give us time to share the evidence with them. " IMO.

(note, I haven't read Ben's original piece, just your rebuttal)

Kat Woods
When we said “publish it as is” we meant publishing it now, without having seen the evidence. Also, you cut that quote out of its context. The full quote is “Given the irreversible damage that would occur by publishing, it simply is inexcusable to not give us a bit of time to correct the libelous falsehoods in this document, and if published as is we intend to pursue legal action”. You could try to make the case that they didn't know that and that it's ambiguous, but I think that's more than made up for by our saying explicitly, in bold, that we were not doing this.

With that as context, I find it really hard to believe that they genuinely believed we would sue them if they published it unchanged a week later, after seeing our evidence. Like, imagine telling somebody, in bold, “we are not asking for X. We're asking for Y”. The whole email is making a case for Y. The email ends with us saying, in bold again, “Please Y” (precisely, we end the email by saying, in bold, “Please wait a week for the evidence. To do otherwise violates the community’s epistemic norms.”). However, in the email there's a single ambiguously worded sentence that fits incredibly well with Y but could also plausibly be read as X. In such cases, people should interpret the ambiguous sentence as us asking for Y unless there’s strong evidence to the contrary. 

I had all that context when I read it, and the reading you're giving here still didn't occur to me.  To me it says, unambiguously, two contradictory things.  When I read something like that I try to find a perspective where the two things don't actually conflict.  What I landed on here was "they won't sue Ben so long as he removes the parts they consider false and libelous, even if what's left is still pretty harsh".  "Nonlinear won't sue so long as Ben reads the evidence, no matter what he does with it" isn't quite ruled out by the text, but leaves a lot of it unexplained: there's a lot of focus on publishing false information in that email, much more than just that one line.  It doesn't really seem to make logical sense either: if some of Ben's post is libelous, why would his looking at contradictory evidence and deciding not to rewrite anything make it better?

Anyway, that's my thought process on it; if I'd got that email -- again, knowing nothing about you folks except what you wrote in the rebuttal post, and I guess that one subthread about nondisparagement agreements from the original -- then I'd certainly have taken it as a threat, contingent on publi... (read more)

This "unambiguous" contradiction seems overly pedantic to me. Surely Kat didn't expect Ben would receive her evidence and do nothing with it? So when Kat asked for time to "gather and share the evidence", she expected Ben, as a reasonable person, would change the article in response, so it wouldn't be "published as is".
David Seiler
Why not?  According to Nonlinear, they had already told Ben they had evidence, and he'd decided to publish anyway: "He insists on going ahead and publishing this with false information intact, and is refusing to give us time to provide receipts/time stamps/text messages and other evidence".  Ben already wasn't doing what Nonlinear wanted; the idea that he might continue shouldn't have been beyond their imagination.  Since that's unlikely, it follows that Lightcone shouldn't have believed it, and should instead have expected that Nonlinear's threat was meant the way it was written.

More broadly, I think for any kind of claim of the form "your interpretation of what I said was clearly wrong and maybe bad faith, it should have been obvious what I really meant", any kind of thoughtful response is going to look pedantic, because it's going to involve parsing through what specifically was said, what they knew when they said it, and what their audience knew when they heard it.  In this kind of discussion I think your pedantry threshold has to be set much higher than usual, or you won't be able to make progress.
(This was indeed my interpretation when I read it. Maybe it was wrong and didn't align with the intent of what Nonlinear tried to communicate, which would be unfortunate, but I think my interpretation was a reasonable one. Commenters on the original post also seemed to think that this interpretation was reasonable.)
Within Reason
I think that's right, but this seems to present a strong case for actual legal defamation. I saw the threat of a libel suit as credible, and I won't be surprised if one gets filed.
Nonlinear indicate in their appendix that they've decided against suing. (Source; it's the paragraph immediately above the linked bookmark). For my part I'm moderately inclined to be dismissive of the lawsuit threat, mainly because the rule-of-thumb I've personally heard about threatening to sue is that, if you're serious about it, you should consult your lawyer before you send the threat; I'm not a lawyer myself, though, and am unsure whether it gives the right output here (especially given that their preferred outcome was for Ben to delay the post without their suing).

Thank you for this. I think Nonlinear made several poor stylistic choices in their response, which likely resulted in many people giving up on reading their post before seeing the evidence against some of Ben Pace's egregious claims. Your post does a better job of clearly and forcefully arguing that Ben Pace's post was misguided than their response did. 

Glad to hear it! I agree with your assessment of the stylistic choices—part of the goal in my post was to divorce those stylistic choices (which I read primarily as evidence of the difficulty of responding with precision under community pressure and during a time of intense stress) from the new concrete evidence that was provided.

The 'stylistic choices' were themselves evidence of wrongdoing, and most of their evidence against claims both misstated the claims they claimed to be refuting and provided further (unwitting?) evidence of wrongdoing.

If you have time, can you provide some examples of what you saw as evidence of wrongdoing? 

I feel that much of what I saw from my limited engagement was a valid refutation of the claims made. For instance, see the examples given in the post above.

There were responses to new claims and I saw those as being about making it clear that other claims, which had been made separately from Ben's post, were also false.

I did see some cases where a refutation and claim didn't exactly match, but I didn't register that as wrongdoing (which might be due to bias or not realising the implications of the changes etc)

Also, are you sure it is fair to claim that most evidence they provided misstated the claims and provided evidence of wrongdoing? Was it really most of the evidence, or just (potentially) some of it?

The most obviously questionable choice was, of course, their questionable usage of quotation marks, which is still only mentioned in the appendix. This introduced substantial confusion as to whether their responses to ostensible quotations in fact addressed the claims made in the original post, and this was exacerbated by their extensive editorialisation. I am not interested in legislating claims, but do notice their determination to muddy the waters, and find it very indicative that they believed this was their best path forward.

Another couple things that stood out.

This 'illustration' is quite a choice.

Why did images and captions like these feature so prominently in the post, including right near the beginning?

This is a document written over more than three months and reviewed by more than ten people. The stylistic and editorial choices are not accidents, or innocent oversights; at the very least, the specific accidents and oversights are suggestive of behavioural tendencies. Ultimately, they seem—in the comments on the original article, and in their later response—manipulative in ways that make the original allegations seem plausible to me. Whether they are in fact true is, of co... (read more)

If you have time, can you provide some examples of what you saw as evidence of wrongdoing? 

I didn't interpret any of these as evidence of the original wrongdoing, but these were the main things Kat did in her evidence thread that, in my opinion, muddled Nonlinear's defense:

  1. Lots of motte-and-bailey/strawmanning their critics, like claiming to refute an allegation but then providing evidence that they didn't do some other, more egregious thing, or saying that the only way something bad could have come out of their actions was that they were "secretly evil"
  2. Selectively engaging only with comments that made them look good, and avoiding responding to comments that looked more incriminating
  3. Abuse of quotation marks, such that most of the time when they claimed someone else said something, the other person had not actually said that thing, but something else that sounded like it, modulo Kat's interpretation.

There was also the section where they may have fabricated allegations against Ben Pace to make the point that anyone can make anyone else sound bad, though I thought the analogy did not quite work and some people thought it was deranged. But I'm not sure if that part is substantiated or ... (read more)

You criticize Ben for not waiting a week for a response, and yet you yourself were unable to wait a week for his response to the recent article. If you look in the comments, a lot of people have found significant misrepresentations in that article: the author paraphrased Chloe incorrectly, put those paraphrases in quotes as if they were her actual words, and then "debunked" those. 

I'm worried that the response to one side presenting a biased account is to present an equally biased account in the other direction. The presentation about the New York Times here appears to be an attempt to harness tribalistic feelings about hated outsiders. But what actually matters is the truth. I don't think Ben's original article ended up entirely truthful, and he should apologise for that. But Kat's reply was not entirely truthful either. 

Would a culture of lawsuits in EA help expose truth? I don't think so. I think it would lead to the rich and powerful being handed a bludgeon they can use to prevent the powerless from speaking out about exploitation and harm. I wish this investigation was handled better, but I am still glad that it happened.  

The great majority of my post focuses on process concerns. The primary sources introduced by Nonlinear are strong evidence of why those process concerns matter, but the process concerns stand independent. I agree that Nonlinear often paraphrased its subjects before responding to those paraphrases; that's why I explicitly pulled specific lines from the original post that the primary sources introduced by Nonlinear stand as evidence against.

My ultimate conclusion was and is explicitly not that Nonlinear is vindicated on every point of criticism. It is that the process was fundamentally unfair and fundamentally out of line with journalistic standards and a duty of care that are important to uphold. Not everyone who is put in a position of needing to reply to a slanted article about them is going to be capable of a perfectly rigorous, even-keeled, precise response that defuses every point of realistically defusable criticism, which is one reason people should not be put in the position of needing to respond to those articles.

I was one of those who criticized Kat's response pretty heavily, but I really appreciated TracingWoodgrains' analysis and it did shift my perspective. I was operating from an assumption that Ben & Hab were using an appropriate truthseeking process, because why wouldn't they? But now I have the sense that they didn't respond to counterevidence from Spencer G (and others), and the promise of counterevidence from Nonlinear, appropriately. So now I'm confused enough to agree with TW's conclusion: mistrial!

(edit: mind you, as my older comments suggest, in the end I won't end up thinking Kat did nothing wrong at all. This post raises doubts about Ben's approach to the case, though, due to which it's hard to tell how bad or not bad the conduct was.)

A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.

I think this is meant to be the fantasy version of Closing Notes on Nonlinear Investigation, where Ben writes,

I don't really want to do more of this kind of work. Our civilization is hurtling toward extinction by building increasingly capable, general, and unalignable ML systems, and I hope to do something about that. Still, I'm open to trades, and my guess is that if you wanted to pay Lightcone around $800k/year, it would be worth it to continue having someone (e.g. me) do this kind of work full-time. I guess if anyone thinks that that's a good trade, they should email me.

I understand that you're taking liberties for your allegory in imagining a "triumphant follow-up", but I think it's worth being clear that the actual follow-up all but states that this was a miserable experience and not worth the time and effort:

I did not work on this post because it was easy. I worked on it because I thought it would be easy. I kept wanting to just share what I'd learned. I ended up spending about ~320 hours (two months of work), ove

... (read more)

Someone can be at once exhausted and triumphant after completing a major, complicated, lengthy process, and that combination of tones is what I took from the post. See eg:

When I saw on Monday that Chloe had decided to write a comment on the post, I felt a sense of "Ah, the job is done." That's all I wanted.

Just to provide the full quote, since I think it's kind of confusing standing on its own: 

I worked on this for far too long. Had I been correctly calibrated about how much work this was at the beginning, I likely wouldn't have pursued it. But once I got started I couldn't see a way to share what I knew without finishing, and I didn't want to let down Alice and Chloe.

My goal here was not to punish Nonlinear, per se. My goal was to get to the point where the accusations I'd found credible could be discussed, openly, at all. 

When I saw on Monday that Chloe had decided to write a comment on the post, I felt a sense of "Ah, the job is done." That's all I wanted. For both sides to be able to share their perspective openly without getting dismissed, and for others to be able to come to their own conclusions.

I have no plans to do more investigations of this sort. I am not investigating Nonlinear further. If someone else wants to pick it up, well, now you know a lot of what I know!

TracingWoodgrains - thanks for an excellent post. I think it should lead many EAs to develop a new and more balanced perspective on this controversy. 

And thanks for mentioning my EA Forum comments about Ben Pace doing amateur investigative reporting -- reporting that doesn't seem, arguably, to have lived up to the standards of basic journalistic integrity (regardless of how much time he and the Lightcone team may have put into it.)

This leaves us with a very awkward question about the ongoing anonymity of 'Alice' and 'Chloe', and I don't know what the right answer is about this issue, but I'm curious what other EAs think.

We seem to be in a situation where two disgruntled ex-employees of an EA organization coordinated to spread very harmful, false or highly exaggerated claims about the organization with the deliberate intent of slandering it and harming its leaders. They convinced someone with power and influence in the community to spend a lot of time confirming their claims, writing a highly negative public report, and paying them as whistleblowers/informants. Later, the slandered organization published a long refutation of the ex-employees' claims, showing that many of them w... (read more)

Whistleblower anonymity should remain protected in the vast majority of situations, including this one, imo

How would you define the set of circumstances that are not in the "vast majority"? My initial reaction is vaguely along the lines of: lack of good faith + clear falsity of at least the main thrust of the accusation + lack of substantial mistreatment of the pseudonymous person by their target. But how does one judge the good faith of a pseudonym?

Whistleblower protection is necessary when Abe provides evidence that Bill harmed Cindy; otherwise, Abe lacks incentive to help Cindy. It is less important when Abe defends himself against harm caused by Bill.

There's something to this, but I don't think the incentives argument maps neatly onto the presence/absence of third parties. It's not entirely clear to me what tangible incentive "Alice" and "Chloe" would have to tell their stories to Ben with permission to share with the broader public. The financial payment seems not to have been anticipated. Having proceeded under pseudonyms, the bulk of any sympathy they might get from the community wouldn't translate into better real-world outcomes for the individuals themselves. 

In these kinds of cases, the motive will often be psychological. People in this position could be motivated by altruistic motives (e.g., a desire for others not to experience the same things they believe they did) or non-altruistic motives (e.g., a hope that the community will roast people who the pseudonymous individuals believe did them wrong). In the former case, a default norm of respecting pseudonymity is important. Altruistic whistleblowers aren't getting much out of it themselves (and are already devoting a lot of time and stress to the communal good).

-13 karma from 5 votes for a comment that doesn't seem to break any Forum norms? Odd

Even if the whistleblowers seem to be making serial false allegations against former employers?

Does EA really want to be a community where people can make false allegations with total impunity and no accountability? 

Doesn't that incentivize false allegations?

Has there been a suggestion that Chloe has made serial false allegations against former employers? I thought that was only Alice.

There's a unilateralist's curse issue here -- if there are (say) 100 people who know the identities of Alice and Chloe, does it take only one of them deciding that breaching the pseudonyms would be justified? [Edit to add: I think the questions Geoffrey is asking are worthwhile ones to ask. I am just struggling to see how an appropriate decision to unmask could be made given the community's structure without creating this problem. I don't see a principled basis for declaring that, e.g., CHSP can legitimately decide to unmask but everyone else had better not.]

So, what do you all think?

I continue to think that something went wrong for people to come away with takes that lump together Alice and Chloe in these ways. 

Not because I'm convinced that Alice is as bad as Nonlinear makes it sound, but because, even based on Nonlinear's portrayal, Chloe is portrayed as having had a poor reaction to the specific employment situation, and (unlike Alice) not as having a general pattern/history of making false/misleading claims. That difference matters immensely regarding whether it's appropriate to warn future potential employers. (Besides, when I directly compare Chloe's writings to Nonlinear's, I find it more likely that they're unfair towards her than vice versa.)

FWIW, I'm not saying that coming away with this interpretation is all your fault. If someone is only skim-reading Nonlinear's post, then I can see why they might form similarly negative views about both Alice and Chloe (though, on close reading, it's apparent that Nonlinear, too, would agree there's a difference). My point is that this is more a feature of their black-and-white counterattack narrative and not so much an accurate account of what I think most likely happened.

Geoffrey Miller
Lukas - I guess one disadvantage of pseudonyms like 'Alice' and 'Chloe' is that it's quite difficult for outsiders who don't know their real identities to distinguish between them very clearly -- especially if their stories get very intertwined. If we can't attach real faces and names to the allegations, and we can't connect their pseudonyms to any other real-world information about them, such as LinkedIn profiles, web pages, EA Forum posts, etc., then it's much harder to remember who's who, and to assess their relative degrees of reliability or culpability.

That's just how the psychology of 'person perception' works. The richer the information we have about people (eg real names, faces, profiles, backgrounds), the easier it is to remember them accurately, distinguish between their actions, and differentiate their stories.
You're right about the effort involved, but when these are real people who you are discussing deanonymizing in order to try to stop them from getting jobs, you should make the effort.

Well, all three key figures at Nonlinear are also real people, and they got deanonymized by Ben Pace's highly critical post, which had the likely effect (unless challenged) of stopping Nonlinear from doing its work, and of stigmatizing its leaders.

So, I don't understand the double standard, where those subject to false allegations don't enjoy anonymity, and those making the false allegations do get to enjoy anonymity.

So, I don't understand the double standard, where those subject to false allegations don't enjoy anonymity, and those making the false allegations do get to enjoy anonymity.

I don't think all people in the replies were arguing that Ben's initial post was okay and deanonymizing Alice and/or Chloe would be bad (which I think you would call a double standard, which I'm not commenting on right now). Some probably do, but some probably think that Ben's initial post was bad, that deanonymizing Alice and/or Chloe would also be bad, and that we shouldn't try to correct one bad with another bad, which doesn't look like a double standard to me.

A quick reminder that moderators have asked, at least for the time being, to please not post personal information that would deanonymize Alice or Chloe.

Lorenzo - yes, I'm complying with that request. 

I'm just puzzled about the apparent double standard where the first people to make allegations enjoy privacy & anonymity (even if their allegations seem to be largely false or exaggerated), but the people they're accusing don't enjoy the same privilege.

I agree that the Forum's rules and norms on privacy protection are confused. A few observations:

(1) Suppose a universe in which the first post on this topic had been from Nonlinear, and had accused Alice and Chloe (by their real names) of a pattern of mendaciously spreading lies about Nonlinear. Would that post have been allowed to stay up? If yes, it is hard to come up with a principled reason why Alice and Chloe can't be named now. 

If no, we would need to think about why this hypothetical post would have been disallowed. The best argument I came up with would be that Alice and Chloe are, as far as I know, people with no real prominence/influence/power ("PIP") within or without EA. Under this argument, there is a greater public interest in the actions of those with PIP, and accepting a role of PIP necessarily means sacrificing some of the privacy rights that non-PIPs get.

(2) Another possibility involves the idea of standing. Under this theory, Alice and Chloe had standing to name people at Nonlinear because they were the ones who allegedly experienced harm. Ben had derivative standing because Alice and Chloe had given him permission to share their stories on the Forum. Unde... (read more)

If yes, it is hard to come up with a principled reason why Alice and Chloe can't be named now.

I expect no one was interested in writing something about Alice and/or Chloe (A/C), by name or otherwise, before Ben's post, and people only want to name them now because they think A/C should face consequences for falsely (they believe) alleging abuse. Which is very close to retaliating against whistleblowers, and we should be very careful, which includes maybe accepting a rule that will have some false positives.

To take a different example, my non-professional understanding is it would normally be legal for an MA employer to report their employee to immigration authorities, but if the employer did this right after the employee had filed a complaint with the attorney general's office, even a false one, this is probably actually illegal retaliation. This will have the occasional false positive, where the employer really was going to report the employee anyway but can't prove it, but we accept that because avoiding the harms of retaliation is more important.

Jeff -- actual 'whistleblowers' make true and important allegations that withstand scrutiny and fact-checking. I agree that legit whistleblowers need the protection of anonymity. 

But not all disgruntled ex-employees with a beef against their former bosses are whistleblowers in this sense. Many are pursuing their own retaliation strategies, often turning trivial or imagined slights into huge subjective moral outrages -- and often getting credulous friends, family, journalists, or activists to support their cause and amplify their narrative.

It's true that most EAs had never heard of 'Alice' or 'Chloe', and didn't care about them, until they made public allegations against Nonlinear via Ben Pace's post. And then, months later, many of us were dismayed and angry that many of their allegations turned out to be fabricated or exaggerated -- harming Nonlinear, wasting thousands of hours of our time, and creating schisms within our community.

So, arguably, we have a case here of two disgruntled ex-employees retaliating against a former employer. Why should their retaliation be protected by anonymity?

Conversely, when Kat Woods debunked many of the claims of Ben Pace (someone with much mo... (read more)

So, arguably, we have a case here of two disgruntled ex-employees retaliating against a former employer. Why should their retaliation be protected by anonymity?

Highlighting that is an important crux (and one on which I have mixed feelings). Not all allegations of incorrect conduct rise to the level of "whistleblowing." A whistleblower brings alleged misconduct on a matter of public importance to light. We grant lots of protections in furtherance of that public interest, not out of regard for the whistleblower's private interests.

Is this a garden-variety dispute between an employer and two employees about terms of employment? Or is this a story about influential people allegedly using their power to mistreat two people who were in a vulnerable position which is of public import because it should update us on how much influence to allow those people?

In Australia, people can be, and have been, prosecuted when they blow the whistle on something commercially or otherwise sensitive (that was of major public importance!) by disclosing it publicly without completely exhausting internal whistleblowing processes. So even in cases of proper whistleblowing, countervailing factors can dominate in determining the consequences for the whistleblower.

Jeff Kaufman
I think that's too strong? For example, under my amateur understanding of MA law, I don't see anything about the anti-retaliation provisions being conditioned on a complaint withstanding scrutiny and fact-checking. And if this were changed to allow employers to retaliate in cases where employees' claims were not sustained, then I think we'd see, as a chilling effect, a decrease in employees raising true claims.

I agree that requiring that the claims be sustained would have a chilling effect. However, in many contexts, we don't extend protections to claims submitted in bad faith. For instance, we grant immunity from legal retaliation against people who file reports of child abuse . . . but that is usually conditioned on the allegations having been made in good faith. If a reported individual can prove that the report was fabricated out of whole cloth, we don't shield the reporter from a defamation suit or other legal consequences.

Note that this is generally a subjective standard -- if the reporter honestly believed the report was appropriate, we shield the reporter from liability. This doubtless allows some bad actors to slip through with immunity. However, we believe that is necessary to avoid reporters deciding not to report out of fear that someone will Monday-morning quarterback them and decide that reporting was objectively unreasonable.

In your example, I suspect that knowingly filing a false report with a state agency is a crime in MA (as it is with a federal agency at the federal level), so there is at least some potential enforcement mechanism for dealing with malicious lies.

I expect no one was interested in writing something about Alice and/or Chloe (A/C), by name or otherwise, before Ben's post [ . . . .]

(Correctly) surmising a lack of interest in writing a hypothetical expose about A/C isn't quite the same thing as reaching a conclusion that the post shouldn't have been allowed to remain. However, I think there is a lot of overlap between the two; the reasons for lack of interest seem similar to the arguments for why the post shouldn't be allowed. So I think we are both somewhere vaguely near "there would be no legitimate/plausible reason for someone to write an expose about A/C, unless one accepted that their whistleblowing activity made it legitimate."

One interesting thing about this framing is that it raises the possibility that the whistleblowers' identities are relevant to the decision. If a major figure in EA were going around telling malicious lies about other EAs, that would be (the subject of an appropriate post / something people would be interested in writing about) independently of any specifically whistleblowing-retaliation angle.

One could stake out an anti-standing argument in which Nonlinear et al. would not be able to identify A/C be... (read more)

Jeff Kaufman
All good points! I'm quite conflicted here.
Could you explain what “core expressive speech” and “extra solicitude” are?
My use of core expressive speech was inspired by "core political speech" in U.S. First Amendment doctrine. E.g., this article describing a hierarchy of protected speech. I meant that Forum speech may be more likely to be speech about the stuff that matters, discouragement of which (including by denying pseudonymity) poses particularly great harms. Probably "high-value speech" would have been clearer here. Solicitude is care or concern, so here I meant that we might particularly care about protecting Forum speech as opposed to other kinds of speech for some reason.

Writing in a personal capacity.

Hi Geoffrey, I think you raise a very reasonable point.

There’s some unfortunate timing at play here: 3/7 of the active mod team—Lizka, Toby, and JP—have been away at a CEA retreat for the past ~week, and have thus mostly been offline. In my view, we would have ideally issued a proper update by now on the earlier notice: “For the time being, please do not post personal information that would deanonymize Alice or Chloe.”

In lieu of that, I’ll instead publish one of my comments from the moderators’ Slack thread, along with some commentary. I’m hoping that this shows some of our thinking and adds to the ongoing discussion here.[1] I’m grateful to yourself, @Ivy Mazzola, @Jason, @Jeff Kaufman and others for helping push this conversation forward.

Starting context: Majority of moderators in agreement that our current policy on doxing, “Revealing someone's real name if they are anonymous on the Forum or elsewhere on the internet is prohibited” (link), should apply in the Alice+Chloe case. (And that it should apply whether or not Alice and/or Chloe have exaggerated their allegations.)

Will (5 days ago)

I think what this comes down to for me is: If

... (read more)

I think what this comes down to for me is: If Kat Woods’ Forum username was pseudonymous, would we have taken down Ben’s post? (Or otherwise removed all references to Kat by her real name?)

If the answer to this is “yes,” then I don’t think Alice+Chloe should be deanonymized.


I do not like the incentive structure that this would create if adopted. Kat did not get to look at this particular drama and decide whether she wanted it discussed under a real or pseudonymous username. Her decision point was when she created her forum account however many years ago, at a time when she had no idea that this kind of drama would erupt. If this position becomes policy, then it incentivizes every person, at the time that they create a forum account, to choose a pseudonym rather than use their real name, to avoid having any unforeseeable future drama publicly associated with their real name. I think this would be bad. People in a community can't build trust if they don't know the identities of the people they are building trust with.

A rule that you couldn't directly name people of moderate or greater prominence wouldn't work well anyway. People here are awfully clever, and I'm sure one could easily write a whistleblowing piece on such a person that left very little doubt about their identity without actually saying their name or other unique identifiers. In fact, I'm not sure if Ben's piece could have been effectively written without most of the Forum readership knowing who Alice and Chloe had worked for.

Will - thanks very much for sharing your views, and some of the discussion amongst the EA Forum moderators.

These are tricky issues, and I'm glad to see that they're getting some serious attention, in terms of the relative costs, benefits, and risks of different possible policies.

I'm also concerned about 'setting a precedent of first-mover advantage'. A blanket policy of first-mover (or first-accuser) anonymity would incentivize EAs to make lots of allegations before the people they're accusing could make counter-allegations. That seems likely to create massive problems, conflicts, toxicity, and schisms within EA. 

Thanks for sharing this!

I had a bunch of thoughts on this situation, enough that I wrote them up as a post. Unfortunately your response came out while I was writing and I didn't see it, but I think it doesn't change much?

In addition to your three paths forward, I see a fourth one: you extend the policy to have the moderators (or another widely-trusted entity) make decisions on when there should be exceptions in cases like this, and write a bit about how you'll make those decisions.

There may be a fifth, which could be seen as a bit of a cop-out. 

It's not clear to me whether the mods claim jurisdiction over deanonymizing conduct that doesn't happen on the Forum. I think claiming such jurisdiction would be inappropriate.

As far as I know, it wouldn't violate the rules of X, Facebook, or most other sites to post that "[Real Names] have been spreading malicious lies about things that happened when they were Nonlinear employees." It certainly would not violate the rules of a state or federal court to do that in a court complaint. The alleged harm of Alice and Chloe spreading malicious lies about Kat, Emerson, and Nonlinear existed off-the-Forum prior to anything being published on the Forum. I don't see why Ben's act of including those allegations in a Forum post creates off-Forum obligations for Nonlinear et al. (or anyone else) that did not exist prior to Ben's post. Alice and Chloe, and people in similar situations, have to accept that many fora exist that do not have norms against this kind of conduct.[1]

If there is no jurisdiction over off-Forum naming here, it seems that the people who want Alice and Chloe named can do ... (read more)

Jeff Kaufman
This isn't about the Forum mods as representatives of the Forum, but instead as the most obvious trusted community members (possibly in consultation with CH) to make a decision. What centralized adjudication avoids is each person having to make their own judgment about whether deanonymization is appropriate in a given circumstance. Let's say NL starts posting the real names on Twitter: should I think poorly of them for breaking an important norm, or is this an exception? Is that an unreasonable unilateral escalation of this dispute? Should I pressure them not to do this?
That approach certainly does offer some significant advantages, but I think it's a lot harder to pull off. Will's three options, the narrower version of mod discretion (which is limited to whether A/C can be named on the Forum), and my fifth option (declining to allow it in this case because, if people decide to name, everyone will find out whether it's on the Forum or not) are all open to the mods because they are mods. The possibility of a centralized adjudication that is recognized as binding in all places requires outside buy-in. I think it needs either (1) the consent of every party directly in interest, or (2) the consent of Nonlinear, broad community support, and the centralized adjudicator's willingness to either release the names themselves or allow widespread burner accounts naming them.

Option (1) is basically arbitration on the consent of the parties; they would be free to choose the mods, Qualy, a poll, or a coin flip. Alice and Chloe would consent to being named if the arbitrators chose, and Nonlinear would agree not to name if the arbitrators ruled against them.[1] If the arbitrators rule for naming, no one should judge Nonlinear, because it would have named Alice and Chloe with their consent. If they rule against naming and Nonlinear did it anyway, everyone should judge them for breaking their agreement. And there's a strong argument to me that we bystanders should honor the decision of those directly involved on a resolution.

But reaching an agreement to arbitrate may be challenging. A rational party would not consent to arbitrate unless it concluded its interests were expected to be better off under arbitration than the counterfactual. Settlements can be mutually beneficial, but I am not yet convinced arbitration would be in Alice and Chloe's interests. So long as a substantial fraction of the community would judge Nonlinear for naming, it probably will not do so. So the status quo for Alice and Chloe would be a win vs. an uncertain future in arbit
Jeff Kaufman
I think you might be thinking too formally? We sometimes have things that work because we decide to respect an authority that doesn't have any formal power. If you make a film you don't have to submit it to the MPAA to get a rating, and if you run a theater you don't have to follow MPAA ratings in deciding whether someone is mature enough to be let into an R-rated movie, but everyone just goes along with the system. I'm imagining that the Forum mods would make a decision for the Forum, and then we'd just go along with it voluntarily even off the Forum, as long as they kept making reasonable decisions.

I'm not seeing any real consensus on what standard to apply for deanonymizing someone. I think a voluntary deference model is much easier when such a consensus exists. If you're on board with the basic decision standard, it's easier to defer even when you disagree with the application in a specific case. In sports, the referees usually get the call right, and errors are evenly distributed between your team and your opponents. But if you fundamentally disagree with the decision standard, the calls will go systematically against your viewpoint. That's much harder to defer to, and people obviously have very strong feelings on either side.

I don't think the MPAA is a great analog here. I'd submit that the MPAA has designed its system carefully in light of the wholly advisory nature of its rulings. Placing things on a five-point continuum helps. I think only a small fraction of users would disagree more than one rating up/down from where the MPAA lands. So rarely would an end user completely disagree with the MPAA outcome. Where an end user knows that the MPAA grades more harshly/leniently than they do, the user can mentally adjust accordingly (as they might when they learn so many Harva... (read more)

Jeff - thanks very much for sharing the link to that post. I encourage others to read it - it's fairly short. It nicely sets out some of the difficulties around anonymity, doxxing, accusations, counter-accusations, etc.

I can't offer any brilliant solutions to these issues, but I am glad to see that the risks of false or exaggerated allegations are getting some serious attention.

I wouldn't classify Ben's post as containing fully anonymous allegations. There was a named community member who implicitly vouched for the allegations having enough substance to lay before the Forum community. That means there was someone in a position to accept social and legal fallout if the decision to post those allegations is proven to have been foolhardy. That seems to be a substantial safeguard against the posting of spurious nonsense. Maybe having such a person identified didn't work out here, but I think it's worth distinguishing between this case and a truly anonymous situation (e.g., burner account registered with throwaway account doing business via Tor, with low likelihood that even the legal system could identify the actual poster for imposition of consequences). That could be a feature rather than a bug for reasons similar to those described above. Deanonymizing someone who claims to be a whistleblower is a big deal -- and arguably we should require an identified poster to accept the potential social and legal fallout if that decision wasn't warranted, as a way of discouraging inappropriate deanonymization. 

Short answer: I think Ben should defer to the community health team as to whether to reveal identities to them or not (I'm guessing they know). And probably the community health team should take their names and add them to their list where orgs can ask CH about any potential hires and learn of red flags in their past. I think Alice should def be included on that list, and Chloe should maybe be included (that's the part I'd let the CH team decide, if it was bad enough). It's possible Alice should be revealed publicly, or maybe just revealed to community organizers in their locale, letting them decide how they want to handle Alice's event attendance and use of community resources.

Extra answer: FWIW I already have bad feelings about CEA's hardcore commitment to anonymity. I do feel EA is too hard on that side, where for example, people accused of things probably won't even be told the names of their accusers or any potentially-identifying details that make accusations less vague. The only reason NL knew in this situation is because the details make it unavoidable that they'd know. But otherwise the standard across EA is that if you are accused of something via CEA's commu... (read more)

Ivy - I really appreciate your long, thoughtful comment here. It's exactly the sort of discussion I was hoping to spark. 

I resonate with many of your conflicted feelings about these ethically complicated situations, given the many 'stakeholders' involved, and the many ways we can get our policies wrong.

Ivy Mazzola
Thanks for your kind comment :)

My understanding is that Kat and Emerson did in fact get their names on CEA's blacklist to some extent.

Here is the bigger problem I see with your proposed solution. If an employer reviewing an application from Alice or Chloe believes their side of this, then the employer would not factor in their presence on CEA's blacklist, since the employer, by hypothesis, thinks CEA was mistaken to put them there. If, on the other hand, an employer reviewing an application from Alice or Chloe believes Nonlinear's side of this, then the employer may justifiably look at the fact that CEA erred by having blacklisted Kat and Emerson, choose not to consult CEA in their hiring decisions at all, and therefore not discover that their applicant was Alice or Chloe. Either way, CEA blacklisting Alice and Chloe seems ineffective.

There are some references here to the community health team’s practices that we think aren’t fully accurate. You can see more here about how we typically handle situations where we hear an accusation (or multiple accusations) and don’t have permission to discuss it with the accused.

Sorry, but I have (re)read that link and I don't see how anything we said is in conflict. Perhaps I didn't word it well. Or am I misunderstanding you? If you could give some hard numbers like: only X% of complaints end up being handled anonymously, and of those, in Z% the complaints end up being unactionable and we just give a listening ear, and only in Y% do anonymous complaints end up being held against the person and meaningfully affecting their lives, then maybe I can agree I made the extent of the dilemma sound overblown. I'm also aware that other tactics come with their own dilemmas. I just wanted to acknowledge that there is a dilemma, and that I am not a "never deanonymize" type of person, before I made some other points.

Reading your link I felt it was not in conflict because: In the case where many people give complaints about Steve, not a single person was willing to have their concerns discussed in detail with him (out of fear that details would reveal them I suppose), let alone be deanonymized by name. So it does sound like EAs like to make complaints in (what I’d call) “extreme anonymity” by “default” and tbh that matches my social and cultural model of ... (read more)

You make a lot of fair points here, and we've grappled with these questions a lot.
Well the first thing that stands out to me is you don’t specify that the anonymity occurs only if the complainant requests it
Ivy Mazzola
Hm I guess that’s true. I guess I thought it went without saying that it would be when people want anonymity, I didn’t imagine there could be an alternative where CH removes names even if the complainant doesn’t request it. That would indeed be worse and a true “default” and I hope no one took that as what I meant. But I think CH asks complainants what degree of anonymity and detail-sharing they are comfortable with by default. And I think a lot of people ask them to not give details, and by default CH does defer to that preference to what might be an abnormal extent, such that anonymity may be functionally the default in our culture and their dealings. But yeah I guess I wonder about hard numbers. It is striking to me that not one person was willing to have the details of the incident shared with Steve though
I assumed the mock-incident was just meant to illustrate how it might arise that someone doesn’t get full information, and it’s easier to get that point across if you have it as everyone requesting anonymity. On the real world point, I do agree that if what happens is something like ‘CEA: do you want anonymity? Complainant: uh sure, might as well’, then that seems suboptimal. Though I’m not sure I could come up with any system that’s better overall.
Ivy Mazzola
Fair, that is a mock incident, but I don't see that aspect as being dramatized or anything. FWIW I have known multiple people whose experiences basically matched Steve's. I just think if we are going to talk about doxxing Alice and Chloe, we might want to think about what it might look like if they had gone elsewhere, or what it might look like in the future if they unduly report others. And as a community I think it must be reckoned with why some people feel upset right now at the protections reporters get when the accused get so few, not even the protection of knowing the details of the claims against them. And a cultural standard where the names of people who make provably false accusations are revealed could protect all of us. So I think it is worth reckoning with, even though I came out supporting non-doxxing in this case.
I think it’s important to separate out how CH handled the allegations vs how Ben did. IMO CH’s actions (banning presenting at EAG but not attending, recommending a contract be used) were quite measured, and of a completely different magnitude than making public anonymous allegations. And I think this whole situation would have been significantly improved if Ben had adopted CEA’s policy of not taking further actions if restrictions are requested.

PS For the people downvoting and disagree-voting on my comment here:

I raised some awkward questions, without offering any answers, conclusions, or recommendations.

Are you disagreeing that it's even legitimate to raise any issues about the ethics of 'whistleblower' anonymity in cases of potential false allegations?

I'd really like to understand what you're disagreeing about here.

I think the questions you're raising are important. I got kind of triggered by the issue I pointed out (and the fact that it's something that has already been discussed in the comments of the other post), so I downvoted the comment overall. (Also, just because Chloe is currently anonymous doesn't mean it's risk-free to imply misleading and damaging things about her – anonymity can be fragile.)

There were many parts of your comment that I agree with. I agree that we probably shouldn't have a norm that guarantees anonymity unconditionally. (But the anonymity protection needs to be strong enough that, if someone temporarily riles up public sentiment against the whistleblowers, people won't jump to de-anonymizing [or to other, perhaps more targeted/discreet measures, such as the one suggested by Ivy here] too quickly; instead, the process there should be diligent and fair as well, just like an initial investigation prompted by the whistleblowers should be. Not saying that this contradicts any of what you were suggesting!)

When things get heated and people downvote each other's comments, it might be good to focus on things we do (probably) agree on. As I said on the Lesswrong ... (read more)

I raised some awkward questions, without offering any answers, conclusions, or recommendations.

I don't feel like you raised discussion with no preference for what the community decided. When I gave my answer, which many people seem to agree with, your response was to question whether that's REALLY what the EA community wants. I think it's a bit disingenuous to suggest that you're just asking a question when you clearly have a preference for how people answer!

Rafael Harth
I disagree-voted because your first paragraph praised the OP.
I'll respond to one aspect you raised that I think might be more significant than you realize. I'll paint a black and white picture just for brevity.

If you run organizations for several years with dozens of employees across time, you will make poor hiring decisions at one time or another. While making a bad hire seems bad, avoiding this risk at all costs is probably a far inferior strategy. If making a bad hire doesn't get in the way of success and doing good, does it even make sense to fixate on it? Also, if you're blind to the signs before it happens, then you reap the consequences, learn an expensive lesson, and are less likely to make the same mistake in future, at least for that type of deficit in judgment. Sometimes the signs are obvious after having made an error, though occasionally the signs are so well hidden that anyone with better judgment than you could still have made the same mistake.

The underlying theme I'm getting at is that embracing mistakes and imperfection is instrumental. Although many EAs might wish that we could all just get hard things right the first time, every time, that's not realistic. We're flawed human beings, and respecting the fact of our limitations is far more practical than giving in to fear and anxiety about not having ultimate control and predictability. If anything, being willing to make mistakes is both rational and productive compared to the alternatives.

Victor - this is total victim-blaming. Good people trying to hire good workers for their organizations can be exploited and ruined by bad employees, just as much as good employees can be exploited and ruined by bad employers. 

You said 'If making a bad hire doesn't get in the way of success and doing good, does it even make sense to fixate on it?'

Well, we've just seen an example of two very bad hires ('Alice' and 'Chloe') almost ruin an organization permanently. They very much got in the way of success and doing good. I would not wish their personalities on any other employers. Why would you?  

We shouldn't 'embrace mistakes' if we can avoid them. And keeping bad workers anonymous is a way of passing along those hiring mistakes to other future employers without any consideration for the suffering and chaos that those bad workers are likely to impose, yet again.

What I think I'm hearing from you (and please correct me if I'm not hearing you) is that you feel conflicted by the thought that the efforts of good people with good intentions can be so easily undone, and that you wish there were some concrete ways to prevent this happening to organizations, both individually and systemically. I hear you on thinking about how things could work better as a system/process/community in this context. (My response won't go into this systems level, not because it's not important, but because I don't have anything useful to offer you right now.) I acknowledge your two examples (Alice and Chloe almost ruined an organization; keeping bad workers anonymous has negative consequences). I'm not trying to dispute these or convince you that you're wrong. What I am trying to highlight is that there is a way to think about these that doesn't require us to never make small mistakes with big consequences. I'm talking about a mindset, which isn't a matter of right or wrong, but simply a mental model that one can choose to apply.

I'm asking you to stash away being right, and whatever perspective you think I hold, for a moment, and do a thought experiment for 60 seconds. At t=0, it looks like ex-employee A, with some influential help, managed to inspire significant online backlash against organization X, led by well-intentioned employer Z. It could easily look like Z's project is done, their reputation is forever tarnished, their options have been severely constrained. Z might well feel that way themselves. Z is a person with good intentions, conviction, strong ambitions, interpersonal skills, and a good work ethic. Suppose that organization X got dismantled at t=1 year. Imagine Z's "default trajectory" extending into t=2 years. What is Z up to now? Do you think they still feel exactly the way they did at t=0? At t=10, is Z successful? Did the events of t=0 really ruin their potential at the time? At t=40, what might Z say reca

Victor - thanks for elaborating on your views, and developing this sort of 'career longtermist' thought experiment. I did it, and did take it seriously.


I've known many, many academics, researchers, writers, etc. who have been 'cancelled' by online mobs that made mountains out of molehills. In many cases, the reputations, careers, and prospects of the cancelled people are ruined. Which is, of course, the whole point of cancelling them -- to silence them, to ostracize them, and to keep them from having any public influence.

In some cases, the cancelled people bounce back, or pivot, or pursue other interests. But in most cases, the cancellation is simply a tragedy, a huge setback, a ruinous misfortune, and a serious waste of their talents and potential.

Sometimes there's a silver lining to their being cancelled, bullied, and ostracized, but mostly not. Bad things can happen to good people, and the good people do not always recover.

So, I think it's very important for EA to consider the many serious costs and risks we would face if we don't take seriously the challenge of minimizing false allegations against EA organizations and EA people.

Thanks for entertaining my thought experiment; I'm glad, because I better understand your perspective now too, and I think I'm in full agreement with your response.

A shift of topic here; feel free not to engage if this doesn't interest you. To share some vague thoughts about how things could be different: I think that posts which are structurally equivalent to a hit piece could be considered against the forum rules, either implicitly already or explicitly. Moderators could intervene before most of the damage is done. I think that policing this isn't as subjective as one might fear, and that certain criteria can be checked even without any assumptions about truthfulness or intentions. Maybe an LLM could work for flagging high-risk posts for moderators to review.

Another angle would be to try to shape discussion norms or attitudes. There might not be a reliable way to influence this space, but one could try, for example, by providing the right material to better equip readers to have better online discussions in general, as well as to recognize unhelpful/manipulative writing. It could become a popular staple, much as I think "Replacing Guilt" is very well regarded. Funnily enough, I have been collating a list of green/orange/red flags in online discussions for other educational reasons. "Attitudes" might be way too subjective/varied to shape, whereas I believe "good discussion norms" can be presented in a concrete way that isn't inflexibly limiting. NVC comes to mind as a concrete framework, and I am of the opinion that the original "sharing information" post can be considered violent communication.
Jeff Kaufman
What does this mean?
a piece of writing with most of the stereotypical properties of a hit piece, regardless of the intention behind it
Jeff Kaufman
Do you think Concerns with Intentional Insights should have been ineligible for the Forum under this standard?
I've just partly read and partly skim-read that post for the first time. I do suspect that post would be ineligible under a hypothetical "no hit pieces under duck typing" rule. I'll refer to posts like this as DTHPs to express my view more generally. (I have no comment on whether it "should" have been allowed or not allowed in the past, or what the past/current Forum standards are.)

I've not thought much about this, but the direction of my current view is that there are more constructive ways of expression than DTHPs, and here I'll vaguely describe three alternatives that I suspect would be more useful. By useful I mean that these alternatives potentially promote better social outcomes within the community, while hopefully not significantly undermining desirable practical outcomes such as a shift in funding or priorities.

1. If nothing else, add emotional honesty to the framing of a DTHP. A DTHP becomes more constructive and less prone to inspire reader bias when it is introduced with a clear and honest statement of the needs, feelings, and requests of the main author. Maybe two out of three is a good enough bar. I'm inclined to think that the NL DTHP failed spectacularly at this.

2. Post a personal invitation for relevant individuals to learn more. Something like "I believe org X is operating in an undesirable way and would urge funders who might otherwise consider donating to X to consider carefully. If you're in this category, I'm happy to have a one-on-one call and to share my reasons why I don't encourage donating to X." (And during the one-on-one you can allude to the mountain of evidence you've gathered, and let someone decide whether they want to see it or not.)

3. Find ways to skirt around what makes a DTHP a DTHP. I think a simple alternative, such as posting a DTHP verbatim to one's personal blog and then only sharing or linking to it with people on a personal level, is already incrementally less socially harmful than posting it to the forums.

Option 4 is we
Jeff Kaufman
I'm biased since I worked on that post, but I think of it as very carefully done and strongly beneficial in its effect, and I think it would be quite bad if similar ones were not allowed on the forum. So I see your proposed DTHP rule as not really capturing what we care about: if a post shares a lot of negative information, as long as it is appropriately fair and careful I think it can be quite a positive contribution here.
I appreciate your perspective, and FWIW I have no immediate concerns about the accuracy of your investigation or the wording of your post. Correct me if I'm wrong: you would like any proposed change in rules or norms to still support what you tried to achieve in that post, which is to provide accurate information, presented fairly, hopefully leading people to update in a way that produces better decision making. I support this. I agree that it's important to have some kind of channel for addressing the kinds of concerns you raised, and I probably would have seen your post as a positive contribution (had I read it and been a part of EA back then, though I'm not aware of the full context). At the same time, I'm saying that a post like yours could have even better outcomes with a little additional effort/adjustment in the writing.

I encourage you to think of my proposed alternatives not as blockers to this kind of positive contribution; that is not their intended purpose. As an example, if a DTHP rule allows DTHPs but requires a compulsory disclosure at the top addressing the relevant needs, feelings, and requests of the writer, I don't think that particularly bars contributions from happening, and I think it would also serve to 1) save time for the writer by prompting reflection on their underlying purpose for writing, and 2) dampen certain harmful biases that a reader is likely to experience from a traditional hit piece. If such a rule had existed back then, presumably you would have taken it into account while writing. If you visualize what you would have done in that situation, do you think the rule would have negatively impacted 1) what you set out to express in your post, and 2) the downstream effects of your post?

This post seems premature to me (edit: which I recognize might seem in conflict with my defense of Ben not giving Nonlinear more time to respond before publication; I'm happy to go into why I hold both of these positions after Ben publishes his response).

In particular, the section 'Avoidable, Unambiguous Falsehoods' contains mostly claims that are, to the best of my knowledge, not actually falsehoods but correct. And that section of the post seems quite load-bearing (given that the central case of the post relies on the spreading of false and/or misleading information about Nonlinear, and on the case for an absence of due diligence).

Ben is working on a response, and given that, I think it's clearly the right call to wait a week or two until we have another round of counter-evidence before jumping to conclusions. If in a week or two people still think the 'Avoidable, Unambiguous Falsehoods' section does indeed contain such things, then I think an analysis like this makes sense, and people can spend time thinking through the implications of that (I disagree with various other parts of the post, but I think mine and Ben's time is best spent engaging with Nonlinear's post and not gettin...

Since I started looking into this, you have: