Jason

14437 karma · Joined Nov 2022 · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA so far has been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts
2

Comments
1603


Unclear, although most nonprofits attract significantly less risky donors than crypto people. (SBF wasn't even the first crypto scammer sentenced to a multidecade term in the Southern District of New York in the past twelve months....)

I'd suggest that even to the extent a nonprofit generally outsources that kind of work, it can't just rely on standard third-party practices when significant information with some indicia of reliability is brought directly to it.

At least where the acceptance rate is 3-5 percent, it seems plausible that something like an "AI Safety Common Pre-Application" could reduce the time burden for many applicants. In many cases, it would seem possible to say, based on information not customized to a specific program, that an applicant just isn't going to make that top 3-5%.

(Applicants meeting specified criteria would presumably be invited to skip the pre-app stage, eliminating the risk of their being erroneously screened out on the basis of common information.)

By analogy: in some courts, you have to seek permission from the court of appeals before appealing. The bar for being allowed to appeal is much lower than the bar for succeeding, which means that denials at the permission stage save disappointed litigants the resources they'd otherwise spend preparing full appeals.

"This is an extremely rich guy who isn't donating any of his money."

But cf. the "stages of change" in the transtheoretical model of behavior change: a lack of action suggests he has not reached the action stage, but he could be in the contemplation or preparation stages.

Moreover, even if a critic has a sufficiently high level of motivation in the abstract, it doesn't follow that they will be incentivized to produce much (if any) "polite, charitable, good-faith, evidentiarily rigorous" work. (Many) critics want to be effective too -- and they may reasonably (maybe even correctly!) think that effort devoted to producing castle memes yields a higher ROI than polishing, simplifying, promoting, and defending their more rigorous critiques.

For example, a committed e/acc's top priority is arguably the avoidance of government regulation that seriously slows down AI development. Memes matter more than rigorous arguments for 90%, perhaps 99%, of the electorate -- so "make EA / AI safety a topic of public scorn and ridicule" seems like a reasonable theory of change for the e/acc folks. When you're mainly trying to tear someone else's work down, you may plausibly see maintaining epistemic rigor in your own camp as relatively less important than if you were actually trying to build something.

I think evaluating the fitness/suitability of major leaders (at least to the extent we are talking about a time when SBF was on the board) and the acceptability of major donors is inherently in scope for any charitable organization or movement.

Do you recall what your conception of a possible customer loss resulting "from bankruptcy" was, and in particular whether it was (at least largely) limited to "monies lent out for margin trading"? Although I haven't done any research, if user accounts had been appropriately segregated and safeguarded, FTX's creditors (in a hypothetical "normal" bankruptcy scenario) shouldn't have been able to make claims against them. There might have been an exception for those involved in margin trading.


"This is a pretty opposite approach to the EA forum, which favours bans."

If you exclude bans for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans here are fairly uncommon. In contrast, it sounds like LW does ban some early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.

As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum of the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community. Major decisionmakers are known to read and even participate in the Forum.

In contrast (although I am not an LW user or a member of the broader rationality community), it seems to me that the LW forum doesn't have this particular relationship to a real-world community. One could say that the LW forum is the official online instantiation of the LessWrong community (which is not limited to being an online community, but that's a major part of it). In that case, we have something somewhat like the (made-up) Roman Catholic Forum (RCF) that is moderated by designees of the Pope. Since the Pope is the authoritative source on what makes something legitimately Roman Catholic, it's appropriate for his designees to employ a heavier hand in deciding what posts and posters are in or out of bounds at the RCF. But CEA/EVF have -- rightfully -- mostly disowned any idea that they (or any other specific entity) decide what is or isn't a valid or correct way to practice effective altruism.

One could also say that the LW forum is an online instantiation of the broader rationality community. That would be somewhat akin to John and Jane's (made-up) Baptist Forum (JJBF), which is moderated by John and Jane. One of the core tenets of Baptist polity is that there are no centralized, authoritative arbiters of faith and practice. So JJBF is just one of many places that Baptists and their critics can go to discuss Baptist topics. It's appropriate for John and Jane to employ a heavier hand in deciding what posts and posters are in or out of bounds at the JJBF because there are plenty of other, similar places for them to go. JJBF isn't anything special. But as noted above, that isn't really true of the EA Forum because of its ~semi-official status in a real-world social movement.

It's ironic that -- in my mind -- either a broader or narrower conception of what LW is would justify tighter content-based moderation practices, while those are harder to justify in the in-between place that the EA Forum occupies. I think the mods here do a good job handling this awkward place for the most part by enforcing viewpoint-neutral rules like civility and letting the community manage most things through the semi-democratic karma method (although I would be somewhat more willing to remove certain content than they are).

Ben said "any of the resultant harms," so I went with something I saw as having a fairly high probability. Also, I mostly limit this to harms caused by "the affiliation with SBF" -- I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more).

To be clear, I do not think the "best case scenario" story in the following three paragraphs would have been likely. However, I think it is plausible, and thus responsive to a view that SBF-related harms were largely inevitable.

In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with -- at least absent some safeguards (a competent CFO, no lawyers implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn't too far gone at this point -- as of mid-2018, he hadn't even created FTX -- and a costly signal from EA leaders (we won't take your money) would have turned him -- or at least some of his key lieutenants -- away from the path he went down? Let's assume not, though.

If SBF declines those safeguards, most orgs decline to take his money and certainly don't put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere -- so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can't/won't meet. Major EA leaders do not work for or advise the FTXFF when/if it forms.

When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was trustworthy and cut off ties with him when that came to light. There's no statutory inquiry into EVF, and no real media story here. SBF is retrospectively seen as an ~apostate who was largely rejected by the community when he showed his true colors, despite the big $$ he had to offer, and who continued to claim affiliation with EA for reputational cover. (Or maybe he would have gotten his feelings hurt and started the FTX Children's Hospital Fund to launder his reputation? Not very likely.)

A more modest mitigation possibility focuses more on EVF, Will, and Nick. In this scenario, at least EVF doesn't take SBF's money. He isn't mentioned on podcasts. Hopefully, Will and Nick don't work with FTXFF, or if they do, they clearly disaffiliate from EVF first. I'd characterize this scenario as limiting the affiliation with SBF by not having what is (rightly or wrongly) seen as EA's flagship organization and its board members risk lending credibility to him. In this scenario, the media narrative is significantly milder -- it's much harder to write a juicy narrative about FTXFF funding various smaller organizations, especially without Will's involvement with SBF as a unifying theme. Moreover, when FTX explodes, EVF is not paralyzed in the same way it was in the actual scenario. It doesn't have a CC investigation, ~$30MM clawback exposure, multiple recused board members, or other fires of its own to put out. It is able to effectively lead/coordinate the movement through a crisis in a way that it wasn't (and arguably still isn't) able to due to its own entanglement. That's hardly avoiding all the harms involved in affiliation with SBF . . . but I'd argue it is a meaningful reduction.

The broader idea there is that it is particularly important to isolate certain parts of the EA ecosystem from low-trustworthiness donors, crypto influence, etc. This runs broader than the specific examples above. For instance, it was not good to have an organization with community-health responsibilities like EVF funded in significant part by a donor seen as having low trustworthiness, or one significantly more likely to be the subject of whistleblowing than the median donor.

Is the better reference class "two-year-old startups," "companies supposedly worth over $10B," or "startups with over a billion invested"? I assume a 100 percent investor loss would be rare, on an annualized basis, in the latter two classes -- but such companies were included in the original claim. Most two-year-old startups don't have nearly the amount of investor money on board that FTX did.

Optics would be great on that one -- an EA has insight that there's a good chance of an FTX collapse (based on non-public info / rumors?), then goes out and shorts SamCoins to profit on the collapse! Recall that any FTX collapse would gut the FTT token at least, so there would still be big customer losses.
