Jason

14399 karma · Joined Nov 2022 · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA so far has been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, along with some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Comments

This is a pretty opposite approach to the EA forum which favours bans.

If you set aside bans issued for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon here. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.

As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.

In contrast (although I am not an LW user or a member of the broader rationality community), it seems to me that the LW forum doesn't have this particular relationship to a real-world community. One could say that the LW forum is the official online instantiation of the LessWrong community (which is not limited to being an online community, but that's a major part of it). In that case, we have something somewhat like the (made-up) Roman Catholic Forum (RCF) that is moderated by designees of the Pope. Since the Pope is the authoritative source on what makes something legitimately Roman Catholic, it's appropriate for his designees to employ a heavier hand in deciding what posts and posters are in or out of bounds at the RCF. But CEA/EVF have -- rightfully -- mostly disowned any idea that they (or any other specific entity) decide what is or isn't a valid or correct way to practice effective altruism.

One could also say that the LW forum is an online instantiation of the broader rationality community. That would be somewhat akin to John and Jane's (made up) Baptist Forum (JJBF) that is moderated by John and Jane. One of the core tenets of Baptist polity is that there are no centralized, authoritative arbiters of faith and practice. So JJBF is just one of many places that Baptists and their critics can go to discuss Baptist topics. It's appropriate for John and Jane to employ a heavier hand in deciding what posts and posters are in or out of bounds at the JJBF because there are plenty of other, similar places for them to go. JJBF isn't anything special. But as noted above, that isn't really true of the EA Forum because of its ~semi-official status in a real-world social movement.

It's ironic that -- in my mind -- either a broader or narrower conception of what LW is would justify tighter content-based moderation practices, while those are harder to justify in the in-between place that the EA Forum occupies. I think the mods here do a good job handling this awkward place for the most part by enforcing viewpoint-neutral rules like civility and letting the community manage most things through the semi-democratic karma method (although I would be somewhat more willing to remove certain content than they are).

Ben said "any of the resultant harms," so I went with something I saw as having a fairly high probability. Also, I mostly limit this to harms caused by "the affiliation with SBF" -- I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more).

To be clear, I do not think the "best case scenario" story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive to a view that SBF-related harms were largely inevitable. 

In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with -- at least absent some safeguards (a competent CFO, no lawyers who were implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn't too far gone at this point -- he hadn't even created FTX as of mid-2018 -- and a costly signal from EA leaders (we won't take your money) would have turned him -- or at least some of his key lieutenants -- away from the path he went down? Let's assume not, though.

If SBF declined those safeguards, most orgs decline to take his money and certainly don't put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere -- so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can't/won't meet. Major EA leaders do not work for or advise the FTXFF when/if it forms. 

When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was trustworthy and cut ties with him when that came to light. There's no statutory inquiry into EVF, and no real media story here. SBF is retrospectively seen as an ~apostate who was largely rejected by the community when he showed his true colors, despite the big $$ he had to offer, and who continued to claim affiliation with EA for reputational cover. (Or maybe he would have gotten his feelings hurt and started the FTX Children's Hospital Fund to launder his reputation? Not very likely.)

A more modest mitigation possibility focuses more on EVF, Will, and Nick. In this scenario, at least EVF doesn't take SBF's money. He isn't mentioned on podcasts. Hopefully, Will and Nick don't work with FTXFF, or if they do they clearly disaffiliate from EVF first. I'd characterize this scenario as limiting the affiliation with SBF by not having what is (rightly or wrongly) seen as EA's flagship organization and its board members risk lending credibility to him. In this scenario, the media narrative is significantly milder -- it's much harder to write a juicy narrative about FTXFF funding various smaller organizations, and without the ability to use Will's involvement with SBF as a unifying theme. Moreover, when FTX explodes in this scenario, EVF is not paralyzed in the same way it was in the actual scenario. It doesn't have a CC investigation, ~$30MM clawback exposure, multiple recused board members, or other fires of its own to put out. It is able to effectively lead/coordinate the movement through a crisis in a way that it wasn't (and arguably still isn't) able to due to its own entanglement. That's hardly avoiding all the harms involved in affiliation with SBF . . . but I'd argue it is a meaningful reduction.

The broader idea there is that it is particularly important to isolate certain parts of the EA ecosystem from the influence of low-trustworthiness donors, crypto influence, etc. This runs broader than the specific examples above. For instance, it was not good to have an organization with community-health responsibilities like EVF funded in significant part by a donor who was seen as low-trustworthiness, or one who was significantly more likely to be the subject of whistleblowing than the median donor.

Is the better reference class "two-year-old startups" or "companies supposedly worth over $10B" or "startups with over a billion invested"? I assume a 100 percent investor loss would be rare, on an annualized basis, in the latter two -- but was included in the original claim. Most two-year-old startups don't have nearly the amount of investor money on board that FTX did.

Optics would be great on that one -- an EA has insight that there's a good chance of FTX collapse (based on not generally-known info / rumors?), goes out and shorts SamCoins to profit on the collapse! Recall that any FTX collapse would gut the FTT token at least, so there would still be big customer losses.

much more media reporting on the EA-FTX association resulting in significantly greater brand damage?

Most likely concern in my eyes. 

The media tends to report on lawsuits when they are filed, at which time they merely contain unsubstantiated allegations and the defendant is less likely to comment. It's unlikely that the media would report on the dismissal of a suit, especially if it was for reasons seen as somewhat technical rather than as a clear vindication of the EA individual/organization.

Moreover, it is pretty likely to me that EVF or other EA-affiliated entities have information they would be embarrassed to see come out in discovery. This is not based on any belief about misconduct, but on the base rate at which organizations that had a bad miss/messup hold related information they would be embarrassed about (and I'd characterize what happened here as a bad miss/messup, whether or not a liability-creating one).

If a sufficiently motivated plaintiff sued, and came up with a legal theory that survived a motion to dismiss, I think it fairly likely that embarrassing information would need to be disclosed in discovery. The plaintiff could require various persons and organizations to answer questions, under oath, that they would rather not answer. Questions from a hostile examiner motivated to uncover damaging information, not a sympathetic podcaster. While "I don't remember" is usually an acceptable answer, it also can make the other side's evidence uncontested if they have anything on point.

For purposes of the next two sentences, "a sufficient basis to believe" means enough that a court would likely allow a good deal of digging if the matter was related or even adjacent to something that was material for purposes of the specific litigation. There's a sufficient basis to believe that EA leadership may have had good reasons to believe SBF had committed fraud against Alameda investors.[1] There is a sufficient basis to believe that EA PR people were aware of SBF-related risk and were actively working on the topic.[2] The plaintiff could also expand the scope of discovery as previously-discovered information warranted. 

If the case didn't settle before summary-judgment motions, the juicy bits would be all laid out in the plaintiff's motion, open to public view.

Prompting the legal system into investigating potential EA involvement in the FTX fraud, costing enormous further staff time despite not finding anything?

This seems rather unlikely. The FTX debtor entity is cooperating with the feds. DOJ has several ex-insiders who are singing like canaries, who have good lawyers, and who know that the more people they help the feds convict, the better things will be for their sentences. If there were reasons for the feds to be looking at potential EA involvement in the FTX fraud, it is almost certain the feds would know that at this point without any help from EA sources. Moreover, the FTX or ex-insider information would likely be enough to get the necessary search warrants, wiretaps, etc. 

There is of course also, as Will's note implies, the distraction/expense/angst/etc. of dealing with litigation, whether or not it ultimately has any merit. That would justify giving some weight to whether a disclosure increases the risk of any lawsuit, independent of any merit or concerns about external adverse effects like publicity. However, in my mind that goes both ways! I'd affirmatively want to disclose most information that makes would-be plaintiffs less likely to sue me. If one's prior is that, conditional on X being untrue, there's a 75% chance I would specifically deny X for litigation-avoidance reasons, then one can update toward X on the fact that X hasn't been denied.
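The last sentence is a standard Bayesian update on silence. A minimal sketch of the arithmetic, using the 75% figure above; the assumption that a true X would never be denied is mine and purely illustrative:

```python
def prob_x_given_no_denial(prior, p_deny_if_false=0.75, p_deny_if_true=0.0):
    """Posterior probability of X after observing that X was not denied.

    Illustrative model: a party denies a false X with probability
    p_deny_if_false and denies a true X with probability p_deny_if_true.
    """
    p_silent_if_true = 1 - p_deny_if_true    # 1.0 under these assumptions
    p_silent_if_false = 1 - p_deny_if_false  # 0.25 under these assumptions
    # Bayes' rule: P(X | silence) = P(silence | X) P(X) / P(silence)
    evidence = prior * p_silent_if_true + (1 - prior) * p_silent_if_false
    return prior * p_silent_if_true / evidence

# A 50% prior on X rises to 80% once the expected denial fails to appear.
print(prob_x_given_no_denial(0.5))  # 0.8
```

The point is just that the update can be substantial: the more reliably one would expect a denial of a false X, the more the absence of a denial shifts credence toward X.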

  1. ^

    Although the Time article doesn't specify exactly what information was shared with EA leadership, it does indicate that an Alameda exile told Time that SBF "didn’t have a distinction between firm capital and trading capital. It was all one pool.” That's at least a badge of fraud (commingling). The exiles accused SBF of various things, including “'willful and knowing violations of agreements or obligations, particularly with regards to creditors'—all language that echoes the U.S. criminal code." The document alleges that SBF was “misreporting numbers” and “failing to update investors on poor performance.” Continuing: "The team 'didn’t trust Sam to be in investor meetings alone,' colleagues wrote. 'Sam will lie, and distort the truth for his own gain,' the document says." Lying to investors is pretty much diagnostic of fraud.

  2. ^

    The New Yorker, quoting an unnamed participant on a leadership slack channel: “I guess my point in sharing this is to raise awareness that a) in some circles SBF’s reputation is very bad b) in some circles SBF’s reputation is closely tied to EA, and c) there’s some chance SBF’s reputation gets much, much worse. But I don’t have any data on these (particularly c, I have no idea what types of scenarios are likely), though it seems like a major PR vulnerability. I imagine people working full-time on PR are aware of this and actively working to mitigate it, but it seemed worth passing on if not since many people may not be having these types of interactions.” 

Could you say more about that? I suggest that "substantial fraction" may mean something quite different in the context of a bank than here. In the scenario I described, the hypothetical exchange would need to see 80-90% of deposits demanded back in a world where the stocks/bonds had to be sold at a 25-50% loss. It could be higher if the exchange had come up with an opt-in lending program that provided adequate cover for not returning (say) 10-15% of the customers' funds on demand.

I'd also suggest that the "simple loss of confidence snowballing" in modern bank runs is often justified based on publicly-known (or discernible) information. I don't think it was a secret that SVB had bought a bunch of long-term Treasuries that sank in value as interest rates increased, and thus that it did not have the asset value to honor 100% of withdrawals. It wasn't a secret in ~2008 that banks' ability to honor 100% withdrawals was based on highly overstated values for mortgage-backed securities.

In contrast, as long as the secret stock/bond purchases remained unknown to outsiders, a massive demand for deposits back would have to occur in the absence of that kind of information. Unlike the traditional banking sector, other places to hold crypto carry risks as well -- even self-custody, which poses risks from hacking, hardware failure, forgetting information, etc. So people aren't going to withdraw unless, at a minimum, convinced that they had a safer place to hold their assets.

Finally, in conducting the cost/benefit analysis, the hypothetical SBF would consider that the potential failure mode only existed in scenarios where 80-90%+ of deposits had been demanded back. Conditional on that having happened, the exchange's value would likely be largely lost anyway. So the difference in those scenarios would be between ~0 and the negative effects of a smaller-scale fraud. If the hypothetical SBF thought the 80-90%+ scenario was pretty unlikely . . . .
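As a rough illustration of the arithmetic behind the 80-90%+ threshold (the specific fractions are hypothetical, drawn from the ranges above):

```python
def redemption_capacity(f_diverted, haircut):
    """Fraction of customer deposits an exchange could still honor if a
    hidden fraction f_diverted of deposits was invested in assets that
    must be liquidated at a fractional loss of `haircut`."""
    liquid = 1 - f_diverted                  # deposits never touched
    recovered = f_diverted * (1 - haircut)   # diverted funds, sold at a loss
    return liquid + recovered

# Hypothetical numbers: 15% of deposits secretly in stocks/bonds,
# sold in a crisis at a 50% loss.
print(redemption_capacity(0.15, 0.5))  # roughly 0.925
```

Under those (made-up) numbers, the hidden shortfall only binds if more than ~92.5% of deposits are demanded back, which is the scenario in which the exchange's value is largely lost anyway.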

(Again, all of this does not include the risk of the fraud leaking out or being discovered.)

I have very little doubt that any advice given to an individual with significant potential exposure to keep their mouths shut was correct advice as to that individual's personal interests. I also have very little doubt that anyone who worked for or formally advised FTXFF fits in that category.

To the extent that Nathan is asking about legal advice given to EVF, I don't think the principle would necessarily hold. Legal advice is going to focus relatively more on the client's legal risks, and less so (if at all) on the traditionally-conceived public interest, what is in the interest of the long-term future, etc. I'd say "charitable organizations should act in their own legal self-interest" probably defaults to true, but that it's a fairly weak presumption. With the possible and partial exception of lawyers who are also insiders, I think lawyers will significantly underweight considerations like the epistemic health of the broader EA community and also be seriously limited at estimating the effect of various scenarios on that consideration.

That being said, I doubt Will is in a particularly good position to evaluate the legal advice given to EVF because he was recused from FTX-related stuff due to serious conflicts of interest. If he were a lawyer, he might be in a good position to estimate -- then he'd have both enough knowledge of facts and the right professional background to infer stuff based on that knowledge. But he isn't.

When I looked at past CC actions, I didn't get the impression that they were in the habit of blowing things out of proportion. But of course I didn't have the full facts of each investigation.

One reason I don't put much stock in the possibility that the CC may not "necessarily [be a] trustworthy or fair arbiter" is that it has to act with reasoning transparency because it is accountable to a public process. Its actions with substance (as opposed to issuing warnings) are reviewable in the UK courts, in proceedings where the charity -- a party with the right knowledge and incentives -- can call them out on dubious findings. The CC may not fear litigation in the same sense that a private entity might, but an agency's budget/resources don't generally go up because it is sued, and agencies tend not to create extra work for themselves for the thrill of it.

Moreover, the rationale of non-disclosure due to CC concerns operates at the margin. "There are particular things we shouldn't disclose in public because the CC might badly misinterpret those statements" is one thing. "There is nothing else useful we can disclose because all of those statements pose an unacceptable risk of the CC badly misinterpreting any further detail" is another.

While this is not expressing an opinion on your broader question, I think the distinction between individual legal exposure and organizational exposure is relevant here. It would be problematic to avoid certain collective costs of FTX by unfairly foisting them off on unconsenting individuals and organizations. As Will alluded to, it is possible that the costs would be borne by other EAs, not the speaker.

That being said, people could be indemnified. So I think it's plausible to update somewhat toward there being some valid reason to fear severe to massive legal exposure. Or toward the possibility that information would come out in litigation that is more damaging than the inferences to be drawn from silence. (Without inside knowledge, I find that more likely than actual severe liability exposure.)

This would be a good post on which to disallow voting by very young accounts. That's not a complete solution, but it's something. I'd also consider disallowing voting on older posts by young accounts for similar reasons.
