

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, along with some independent reading. I had occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.





Closely related to "misappropriation of funds" is "lying to customers." That ends up being a lesser offense than misappropriation, but I'll note it since it is more intrinsically immune to the "intent matters / chaos" line of defense. As one example (quoting the prosecution's sentencing memo):

In FTX’s “safeguarding of assets & digital token management policy,” which was submitted to regulators in The Bahamas and shared with some customers, FTX stated that the company had “a responsibility to ensure that customer assets are appropriately safeguarded and segregated from its own funds.” (GX-340). That policy further stated that “customer assets (both fiat and virtual assets) are segregated from its own assets,” that “all third-party service providers are aware that customer funds do not represent property of [FTX],” and that “all third-party providers are aware that customer assets are held in trust.” (Id.).

Once you go down the path of lying to customers to induce them to give you money (and to a regulator charged with customer protection), any losses that are causally related to those fraudulent statements are on you. The chaos theory just won't gel with an honest belief that customer assets were "appropriately safeguarded and segregated" and "held in trust." I do consider this a lesser offense than misappropriation because the specific intent required is obtaining the money through deceit rather than affirmatively stealing it. But it's still a very serious offense.

Again, under the assumption that your goal is fraud detection. 

It seems like a goal of ~"fraud detection" not further specified may be near the nadir of utility for an investigation.

  • If you go significantly narrower, then how EA managed (or didn't manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
  • If you go significantly broader (cf. Oli's reference to "detecting and propagating information about future adversarial behavior"), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.

My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the "EA immune system" at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., "fraud"), or pathogens writ large (e.g., "future adversarial behavior").

  1. ^

    Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.

Was it not a crypto-focused platform (e.g., a "Joe's Crypto Podcast")? Was there no particular reason to know or believe that the company had used, or was going to use, something Person said as part of its marketing campaign? If the answer to both is no, it doesn't affect my $5 Manifold bet.

Wang pled guilty to serious crimes, including wire fraud, conspiracy to commit securities fraud, and conspiracy to commit commodities fraud.

(can't link plea PDF on mobile) [Edit: https://fm.cnbc.com/applications/cnbc.com/resources/editorialfiles/2022/12/21/1671676058536-Gary_Wang_Plea_Agreement.pdf ]

On the nitpick: After reflection, I'd go with a mixed approach (somewhere between even odds and weighted odds of selection). If the point is donor oversight/evaluation/accountability, then I am hesitant to give the grantmakers too much information ex ante on which grants are very likely/unlikely to get the public writeup treatment. You could do some sort of weighted stratified sampling, though.
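To make the mixed approach concrete, here is a minimal sketch of what "somewhere between even odds and weighted odds of selection" could look like: each grant's selection weight is a blend of a uniform share and a dollar-proportional share, controlled by a mixing parameter. Everything here (the function name, the `mix` parameter, the sample grants) is illustrative and not drawn from any real grantmaker's process.

```python
import random

def select_grants_for_writeup(grants, k, mix=0.5, seed=None):
    """Sample k grants without replacement for public writeups.

    Selection odds interpolate between uniform (mix=0) and
    dollar-weighted (mix=1). `grants` is a list of
    (name, dollar_amount) tuples; all names and parameters
    are hypothetical.
    """
    rng = random.Random(seed)
    n = len(grants)
    total = sum(amount for _, amount in grants)
    # Blend a uniform weight with a dollar-proportional weight.
    weights = {name: (1 - mix) / n + mix * amount / total
               for name, amount in grants}
    chosen = []
    pool = dict(weights)
    for _ in range(min(k, n)):
        names = list(pool)
        pick = rng.choices(names, weights=[pool[m] for m in names], k=1)[0]
        chosen.append(pick)
        del pool[pick]  # sample without replacement
    return chosen
```

With `mix` strictly between 0 and 1, even the smallest grant retains a real chance of selection, so grantmakers can't be confident ex ante which grants will escape the public writeup treatment.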

I think grant size also comes into play in the detail level of the writeup. I don't think most people want more than a paragraph, maximum, on a $2K grant; I'd hope for considerably more on $234K. So the overweighting of small grants relative to their share of the dollar-amount pie would be at least somewhat counterbalanced by their getting briefer writeups if selected, and the expected-words-per-dollar figure might be fairly similar across the range of grant sizes.

Thanks for the references. The liability system needs to cover AI harms that are not catastrophes, including the stuff that goes by "AI ethics" more than "AI safety." Indeed, those are the kinds of harms that are likely more legible to the public and will drive public support for liability rules.

(In the US system, that will often be a jury of laypersons deciding any proof issues, by the way. In the federal system at least, that rule has a constitutional basis and isn't changeable by ordinary legislation.)

I'm late to the discussion, but I'm curious how much of the potential value would be unlocked -- at least for modest-size, many-grant orgs like EA Funds -- if we got a better writeup for a random ~10 percent of grants (with the selection of that 10 percent happening after the grant decisions were made).

If the idea is to assess the quality of the median grant, rather than individual grants, then a random sample should work about as well as writing and polishing writeups for dozens and dozens of grants a year.

Even with decades of development of pharma knowledge, and a complex regulatory system, things still blow up badly (e.g., Vioxx). And the pharma companies usually end up paying through the nose in liability, too. Here, we have a much weaker body of built-up knowledge and much weaker ex ante regulation.

I've never seen a good business case for valuing Twitter at anywhere near the $44B it took to acquire it. SBF didn't have nearly that much available, so he'd still have been looking at Musk as the majority owner....and it was 100 percent foreseeable that Musk had his own ideological axes to grind. That SBF ran FTX lean is weak evidence that he could have cut 90 percent of Twitter staff without serious difficulties, and the train wreck caused by Musk's cuts suggests that was never realistic.

Finally, the idea that SBF could somehow make Twitter significantly "more scout-mindset and truth-seeking oriented" has never been fleshed out AFAIK. Also, it would be a surprising and suspicious convergence that the way to run Twitter profitably would also have been the way to run it altruistically.

I wasn't suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (The FTXFF folks might have caught the scent if they had been forensic accountants -- but they weren't, and I'm not at all confident of that in any event.) I'm suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.

Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad to the extent that the firm thought it might get more money back in a fraud loss than from some ordinary-course business failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn't have prioritized fraud detection.

In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn't raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders, after all non-financial exposures are considered. They are also not on a tight time schedule.
