One of the largest cryptocurrency exchanges, FTX, recently imploded after apparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine has good coverage, especially his recent post on their balance sheet. Normally a crypto exchange going bust isn't something I'd pay that much attention to, aside from sympathy for its customers, but its Future Fund was one of the largest funders in effective altruism (EA).

One reaction I've seen in several places, mostly outside EA, is something like, "this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?" I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong.

The opposite reaction, which I've also seen in several places, mostly within EA, is more like, "how could we have caught this when serious institutional investors with hundreds of millions of dollars on the line missed it?" FTX had raised about $2B in external funding, including ~$200M from Sequoia, ~$100M from SoftBank, and ~$100M from the Ontario Teachers' Pension Plan. I think this argument has some truth to it: this is part of why I'm ok dismissing the "obvious fraud" view of the previous paragraph. But I also think it lets EA off too easily.

The issue is, we had a lot more on the line than their investors did. Their worst case was that their investments would go to zero and they would face mild public embarrassment at having funded something that turned out so poorly. A strategy of making a lot of risky bets can do well, especially if spending more time investigating each opportunity trades off against making more investments or means sometimes losing the best opportunities to competitor funds. Half of their investments could fail and they could still come out ahead if the other half did well enough. Sequoia wrote afterwards, "We are in the business of taking risk. Some investments will surprise to the upside, and some will surprise to the downside."

This was not our situation:

  • The money FTX planned to donate represented a far greater portion of the EA "portfolio" than FTX did for these institutional investors. The FTX Future Fund was probably the biggest source of EA funding after Open Philanthropy, and was ramping up very quickly.

  • This bankruptcy means that many organizations now suddenly have much less money than they expected: the FTX Future Fund's committed grants won't be paid out, and the moral and legal status of past grants is unclear. [1] Institutional investors were not relying on the continued healthy operation of FTX or any other single company they invested in, and were thinking of the venture capital segment of their portfolios as a long-term investment.

  • FTX and their affiliated hedge fund, Alameda Research, were founded and run by people from the effective altruism community with the explicit goal of earning money to donate. Their founder, Sam Bankman-Fried, was profiled by 80,000 Hours and listed on their homepage as an example of earning to give, back when he was a first-year trader at Jane Street, and he was later on the board of the Centre for Effective Altruism's US branch. FTX, and Bankman-Fried in particular, represented in part an investment of reputation, and unlike typical financial investments, reputational investments can go negative.

These other investors did have much more experience evaluating large startups than most EAs, but we have people in the community who do this kind of evaluation professionally, and it would also have been possible to hire an outside group. I suspect the main reason this didn't happen is that EA isn't a unified whole, it's a collection of individuals and organizations with similar goals and ways of thinking about the world. There are likely many things that would be worth it for "EA" to do that don't happen because it's not clear who would do them or even whether someone is already quietly doing the work. I hope building a better process for identifying and coordinating on this sort of work is one of the things that can come out of this collapse.

While at this stage it's still not clear to me whether more vetting would have prevented this abuse of customer funds (perhaps by leading to better governance at FTX or more robust separation between FTX and Alameda) or led EAs to be more cautious with FTX funding, I don't think it's enough to say that since Sequoia etc. missed it we most likely would have as well.


[1] Disclosure: my work may have been funded in part by FTX. I've asked for my pay to be put on hold if it would be coming from an FTX grant.

Comment via: facebook, mastodon

Comments

In addition to having a lot more on the line, other reasons to expect better of ourselves:

  • EA had (at least potential) access to a lot of information that investors may not have, in particular about Alameda's early exodus in 2018.
  • EA had much more time to investigate and vet SBF—there's typically a very large premium for investors to move fast during fundraising, to minimize distraction for the CEO/team.

Because of the second point, many professional investors do surprisingly little vetting. For example, SoftBank is pretty widely reputed to be "dumb money"; IIRC they shook hands on huge investments in Uber and WeWork on the basis of a single meeting, and their flagship Vision Fund lost 8% (~$8b) this past quarter alone. I don't know about OTPP, but I imagine they could be similarly diligence-light given their relatively short history as a venture investor. Sequoia is less famously dumb than those two, but still may not have done much vetting if FTX was perceived to be a "hot" deal with lots of time pressure.

Very much agree. Some EAs knew SBF for almost a decade and plausibly interacted with him for hundreds of hours (including in non-professional settings which are usually more revealing of someone's character and personality).

The obvious default conclusion here is that there is nothing inherent in his personality that made him more likely to do this, compared to other EAs.

Other EAs haven't gambled billions of customer funds simply because they didn't have billions of customer funds. If they had been in SBF's situation, they might have fallen to the temptation too.

Needless to say, I think what SBF did was unquestionably wrong and condemn it. I'm simply also pessimistic enough to think that I myself, and other regular humans around me, would also fall to such temptations. Somehow, I really doubt that 100% of the people condemning SBF would have resisted the temptation if they were put into the same situation.

[comment deleted]

Strongly agree with these points, and I think the first is what makes the overwhelming difference in why EA should have done better. Multiple people allege (both publicly on the forum and privately, in confidence, to me) that they told EA leadership that SBF was doing things that strongly break with EA values, going back to the Alameda situation of 2018.

This doesn't imply we should have known about any particular illegal activity SBF might have been undertaking, but I would have expected SBF not to have been promoted so heavily throughout the past couple of years. This is especially surprising given my perception of EA's high risk-aversion around marketing the movement over the years.

Even if we put everything else to the side, I was shocked to find out that SBF had a $40m property in the Bahamas, and felt very naive for having believed the much more popular stories that he merely drove an old car and slept on a beanbag in front of his desk. Surely many other EAs who had visited the Bahamas had been to his penthouse suite or had caught wind of it, and could have corrected the mistaken image somehow or flagged it as a risk to the movement.

Jeff - this is a useful perspective, and I agree with some of it, but I think it's still loading a bit too much guilt onto EA people and organizations for being duped and betrayed by a major donor. 

EAs might have put a little bit too much epistemic trust in subject matter experts regarding SBF and FTX -- but how can we do otherwise, practically speaking?

In this case, I think there was a tacit, probably largely unconscious trust that if major VCs, investors, politicians, and journalists trusted SBF, then we could probably trust him too. This was not just a matter of large VC firms vetting SBF and giving him their seal of approval through massive investments (flawed and rushed though their vetting may have been).

It's also a matter of ordinary crypto investors, influencers, and journalists largely (though not uniformly) thinking FTX was OK, and trusting him with billions of dollars of their money, in an industry that is actually quite skeptical a lot of the time. And major politicians, political parties, and PACs who accepted millions in donations trusting that SBF's reputation would not suffer such a colossal downturn that they would be implicated. And journalists from leading national publications doing their own forms of due diligence and investigative journalism on their interview subjects.

So, we have a collective failure of at least four industries outside EA -- venture capital, crypto experts, political fund-raisers, and mainstream journalists -- missing most of the alleged, post-hoc, red flags about SBF. The main difference between EA and those other four industries is that I see us doing a lot of healthy, open-minded, constructive, critical dialogue about what we could have done differently, and I don't see the other four industries doing much -- or any -- of that.

Let's consider an analogous situation in cause-area science rather than donor finance. Suppose EAs read some expert scientific literature about a potential cause area -- whether global catastrophic biological risks, nuclear containment, deworming efficacy, direct cash transfers, geoengineering, or any other domain. Suppose we convince each other, and donors, to spend billions on a particular cause area based on expert consensus about what will work to reduce suffering or risk. And then suppose that some of the key research that we used to recommend that cause area turns out to have been based on false data fabricated by a powerful sociopathic scientist and their lab -- but the data were published in major journals, peer-reviewed by leading scientists, cited by hundreds of other experts, informed public policy, etc. 

How much culpability would EA have in that situation? Should we have done our own peer review of the key evidence in the cause area? Should we have asked the key science labs for their original data? Should we have hired subject matter experts to do some forensic analysis of the peer-reviewed papers? That seems impractical. At a certain point, we just have to trust the peer-review process -- whether in science, or in finance, politics, and journalism -- with the grim understanding that we will sometimes be fooled and betrayed. 

The major disanalogy here would be if the key sociopathic scientist who faked the data was personally known to the leaders of a movement for many years, and was directly involved in the community. But even there, I don't think we should be too self-castigating. I have known several behavioral scientists more-or-less well, over the years, who turned out to be very bad actors who faked data, but who were widely trusted in their fields, who didn't raise any big red flags, and who left all of their colleagues scratching their heads afterwards, asking 'How on Earth did I miss the fact that this was a really shady researcher?' The answer usually turns out to be that the disgraced researcher put the time other researchers would have spent collecting real data into covering their tracks and duping their colleagues, and they were just very good at being deceptive and manipulative.

Science relies on trust, so it's relatively vulnerable to intentionally bad, deceptive actors. EA also relies on trust in subject matter experts, so we're also relatively vulnerable to bad actors. But unless we want to replicate every due diligence process, every vetting process, every political 'opposition research' process, every peer review process, every investigative journalism process, then we will remain vulnerable to the occasional error -- and sometimes those errors will be very big and very harmful.

That might just be the price of admission when trying to do evidence-based good using finances from donors. 

Of course, there are lots of ways we could do better in the future, especially in doing somewhat deeper dives into key donors, the integrity of key organizations and leaders, and the epistemics around key cause areas. I'm just cautioning against over-correcting in the direction of distrust and paranoia.

Epistemic status of this comment: I'm slightly steel-manning a potential counter-argument against Jeff's original post, and I think I'm mostly right, but I could easily be persuaded otherwise.

What's the evidence that people actually went through the virtuous process described here of thinking about whether to trust SBF and checking all these independent sources? (The science analogy is an interesting one though, I agree.)

I don't know.  Others know much more than I do. 

I wasn't claiming there was a systematic, formalized process of checking all these independent sources in an exhaustive, detailed, skeptical way.

I was only suggesting that from the viewpoint of most EAs, 'there was a tacit, probably largely unconscious trust that if major VCs, investors, politicians, and journalists trusted SBF, then we can probably trust him too'....

"At a certain point, we just have to trust the peer-review process"

Coming here late, found it an interesting comment overall, but just thought I'd say something re interpreting the peer-reviewed literature as an academic, as I think people often misunderstand what peer review does. It's pretty weak and you don't just trust what comes out! Instead, look for consistent results being produced by at least a few independent groups, without there being contradictory research (researchers will rarely publish replications of results, but if a set of results don't corroborate a single plausible theoretical picture, then something is iffy). (Note it can happen for whole communities of researchers to go down the wrong path, though - it's just less likely than for an individual study.) Also, talk to people in the field about it! So there are fairly low cost ways to make better judgements than believing what one researcher tells you. The scientific fraud cases that I know involved results from just one researcher or group, and sensible people would have had a fair degree of scepticism without further corroboration. Just in case anyone reading this is ever in the position of deciding whether to allocate significant funding based on published research.

"Science relies on trust, so it's relatively vulnerable to intentionally bad, deceptive actors"

I don't think science does rely on trust particularly heavily, as you can have research groups corroborating or casting doubt on others' research. "Relatively" compared to what? I don't see why it would be more vulnerable to bad actors than most other things humans do.

To weigh in on a personal note: 

  1. I had a reasonably close friend who, seven years ago, was unmasked as a compulsive liar and whose life turned out to be a house of cards. Watching it unravel was both upsetting and enchanting. Today, they are the CEO of a well-funded startup.
  2. Around the same time, my advisor was involved in a very public academic scandal when it turned out that his co-author had 100% fabricated their study's data. Once that came to light, people started digging, and it turned out that the co-author had been making stuff up for a long time undetected. He only got caught when two other researchers tried to replicate his methods -- an enormous lift -- and couldn't. 
  3. I think it was Agnes Callard who wrote that it's better to trust and to be taken advantage of sometimes than to be distrustful and close yourself off. In my personal life, I try to live by this.  It's too easy to let scar tissue accumulate and find yourself immobilized. 
  4. But when large sums of money -- particularly other people's money -- are involved, we have to hold ourselves to higher standards, even/especially when it causes social friction.
  5. What's the opposite of "isolated demands for rigor," motivated reasoning? Sam came in with big money and big talk and I get why we fell for it -- I myself filled out the form to "come hang out in the Bahamas" with them earlier this year.
  6. I agree very much with Jeff's point that more clarity about who's doing what in the movement would be a great thing to build from this.

It's not just investors. A number of FTX employees had much of their life savings "invested" in and on FTX. They had a huge (to them) financial risk on the line, plus access. The legal and compliance people seem not to have known, as evidenced by their mass resignations. They had huge (to them) reputational risk on the line, plus access. As far as we know, none of the invested employees nor the professionals detected the high risk of fraud.

Just a quick addition that I think there's been too much focus on VCs in these discussions. FTX was initially aimed as a platform for professional crypto traders. If FTX went down, these traders using the platform stood to lose a large fraction of their capital, and if they'd taken external money, to go out of business. So I think they did have very large incentives to understand the downside risks (unlike VCs who are mainly concerned with potential upside).

"and unlike typical financial investments, reputational investments can go negative."

This is true; but the far more significant contributing factor of this sort is that impact on the world can go negative. We had more at stake because we think that defrauding customers is a huge harm to the world, and the purpose of investing in SBF is to create positive impact on the world. The market for FTX/FTT doesn't price in negative impact on humankind.

There's some discussion of whether implementing impact certificate markets—which might be more of an academic curiosity at this point—would have similar problems, where translating a utility function that goes negative (impact on the world) into one with a lower bound of zero (financial) would incentivize negative projects. As far as I can tell, cash prizes for positive impact projects have the same fundamental problem, though I'd love to be corrected here if I'm missing something. One way around this would be requiring a form of insurance (prior to entering impact markets, prize competitions, earning-to-give careers, AI-capabilities-research-in-the-interest-of-alignment, etc), though I think there are a lot of both practical and incentive-flavored barriers to these emerging any time soon.
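To make the truncation problem concrete, here's a minimal sketch in Python with made-up numbers (the 50/50 gamble and the expected_values helper are hypothetical illustrations, not drawn from any real impact market): a project whose expected impact is negative still has a positive expected payout once losses are floored at zero.

    import random

    # Hypothetical project: 50% chance of +10 impact, 50% chance of -20.
    # Its expected impact is -5, so the project is net harmful.
    def expected_values(n_samples: int = 200_000) -> tuple[float, float]:
        impacts = [10.0 if random.random() < 0.5 else -20.0
                   for _ in range(n_samples)]
        expected_impact = sum(impacts) / n_samples
        # An impact market or prize pays max(0, impact): harm is never charged.
        expected_payout = sum(max(0.0, x) for x in impacts) / n_samples
        return expected_impact, expected_payout

    impact, payout = expected_values()
    print(f"expected impact: {impact:+.2f}")  # roughly -5: net harmful
    print(f"expected payout: {payout:+.2f}")  # roughly +5: still worth attempting

Any mechanism that only rewards upside has this shape; the insurance requirement above amounts to re-attaching the left tail of the distribution.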

I'm curious whether there are other areas in EA where we systematically miss the necessity of oversight for protection against negative outcomes that we care about, where markets / regulatory and legal systems / social norms will be predictably insufficient watchdogs. 


Seems relevant to think about this in relation to our other big donors. I.e., is there someone whose job it is to ask hard questions about Moskovitz and Tuna's financials?

There's a good deal more transparency there -- the Good Ventures Foundation has filed 990-PFs as required by law, and you can see both how much has been irrevocably devoted to charity and what the underlying assets are. Of course, tax forms are not ground truth, but asking questions is not going to get someone who has hypothetically perjured themselves to the IRS to 'fess up.

More generally, although I think the overall idea is good, one has to be really careful that the donors feel respected. The biggest risk in most cases is that the donors will pick up their money and either donate it elsewhere or (if not yet in a 501c3) spend it themselves.

They are human, and e.g. knowing the movement is funding an FTE devoted to investigating and planning for the possibility that they are a total phony would unsettle most people and make them feel less connected. Even more so since there would only be a small # of people who would justify costly vetting.

And especially if the FTE is employed by a org they financially supported! Which gets back to a meta point that EA needs an org that is funded and controlled by rank-and-file EAs including "small" donors . . .

I think that we probably spent too much time ensuring SBF felt respected and too little ensuring that he and Caroline weren't engaged in large scale fraud.

Very likely true! My point was that there are many types of donor risks, and we have to be careful about managing the entire basket of them. Means of mitigating certain types of donor risks may increase other risks. Thus, we should consider the risk of fraud/insolvency based on readily-available information against other donor-related risks before deciding how much to poke the bear. 

Here, SBF's wealth was poorly understood, was in a small private company, was in an industry with lots of fraud and very poor regulation, was new money, and was not "in the bag" in terms of being committed to charitable causes. The next megadonor might be someone whose wealth came from shares in a publicly-traded "boring" company in a more mature and properly-regulated field. Their donor risk profile would be much different than SBF's.

There’s a big difference between “we should have seen this coming” and “we should have taken steps to mitigate possible disaster.”

The fact that EA had more to lose in some ways from the FTX bust in no way provided information to predict that bust. “If professional investors missed this…” holds true whether EA had $1 or $1 billion on the line.

But there are steps EA could have taken to mitigate the fallout even without having been able to predict fraud.

For example, we could have invested in legal clarity and contingency plans in case of FTX going bankrupt or being revealed as fraudulent. It’s like wearing your seatbelt. Nobody wears a seatbelt because they predict they’re going to get in a crash. They do it because it’s a cheap and potent form of risk mitigation, without making any effort to predict the outcome on their specific trip. EA risk management should look like installing seatbelts for the movement.

"Legal less murkiness" anyway!

I strong upvoted the comment, but I think the decentralization of the community is a real challenge as far as communicating legal matters (and to a lesser extent, certain types of contingency plans) either before or after a disaster.

Before: Suppose a leading organization in EA had commissioned a legal opinion on the effects of a major donor's insolvency. Although the opinion is generic, it isn't hard to figure out who is in mind. What does the organization do with the opinion? If you think the optics are bad now, imagine a world in which it got out that a major EA organization had commissioned an opinion in advance of the fraud's discovery, shared it with FTXFF grantees, and discussed in advance what to do if there was fraud. Also, many of the actions one could take or prepare in advance to mitigate damage from a fraud or insolvency would not qualify as "cheap and potent." 

After: Suppose a leading organization in EA now commissions a legal opinion on the legal effects of the FTX insolvency and potential strategies. That's going to be protected by attorney-client privilege. But sharing that with other organizations and grantees may be difficult to do without waiving privilege or at least jumping through some hoops.

If we had commissioned a report on contingency plans for FTX fraud (w/o predicting fraud, just saying what we’d do to mitigate the fallout if it happened) I think that would make us look prudent? Because it would have been prudent.

I’m no financial risk manager, but the point of having one is to figure out the set of things that are cost effective. I will bet a buttcheek that the number of common sense cost effective risk mitigation steps we could have taken is greater than zero.

I agree that we should have taken this risk more seriously. I think "is SBF doing fraud" was worth at least 1 person thinking about it full time.

I am personally interested in this job for the next billionaire donor!

I don't know if you intend to imply that because it mattered more to us we ought to have spotted it; if so, I disagree. People holding FTT (FTX's coin) or shorting it had huge amounts on the line. And they understood crypto. I guess I expected us to beat them to it, but that we didn't doesn't downgrade our general competence much.

Does the EA community as a whole have the right to ask SBF to show us the financials of FTX/Alameda and let us examine them, though?

Grant receivers going after the grant provider, asking him to show accountability?

The EA community as a whole doesn't have any rights, as it's not a legal person. Grant receivers don't have legal rights to get information on grant providers, but it is common to do some kind of due diligence; the only thing they can do is not accept grants if they don't get information (and this happens sometimes). The same is true for investors in private companies, by the way: there is no obligation to provide information, but of course they just won't invest if they don't get the information they need.

If you are talking about moral rights instead of legal rights this is of course a very different thing and highly debatable.

I've only had time to skim the comments, so apologies if this is elsewhere, but isn't the issue not so much that EA didn't have the skills to evaluate FTX; it's that it didn't have access to the information? 

The difference between EA and Sequoia capital is that Sequoia Capital could demand almost any financial information they wanted and FTX had a strong incentive to comply.

Let's consider another example: let's say you had suspicions that Open Phil or Longview were engaged in fraud. What would your next step be? If you sent an email demanding a lot of financial documents, I think they'd be unlikely to reply/send them, and if they did, you couldn't verify their authenticity.

So I take the point that we need to be more vigilant in the future, most especially about putting people on a pedestal and making them symbolic of EA; and it does sound like a lot of people knew SBF was unpleasant to work with and either weren't heard or didn't say anything. But I don't see how an amorphous movement of individuals could ever have subjected FTX to more scrutiny than someone who invested $200m.

"There are likely many things that would be worth it for 'EA' to do that don't happen because it's not clear who would do them or even whether someone is already quietly doing the work. I hope building a better process for identifying and coordinating on this sort of work is one of the things that can come out of this collapse."

Agree. We should be better at funding and supporting these “public goods” that cut across multiple groups and individuals.

[comment deleted]

Are you talking about the board of the FTX Future Fund or about the team working there? Because the board solely consisted of FTX/Alameda people.

[comment deleted]

everybody on this post

Matching names to https://ftxfuturefund.org/about, the people resigning seem to have been the employees and advisors of the fund.

That the board was all FTX/Alameda people is a big red flag.

Is that a red flag? This was a foundation that existed to distribute FTX/Alameda money.

I think it was at least an amber flag, and incompatible with "securing the bag for EA".

Responded on your post; the money OpenPhil is distributing seems to be in the same situation

Although as of 2018 -- the IRS is rather slow in posting 990s, especially after the pandemic started -- there was $2.1B in the Good Ventures Foundation distributed across a range of stocks. So some of the risks with FTX are there, and others are not. The board could, I suppose, start devoting its resources to building shrines to Dolly Parton. It could not divert the funds from charitable use. It is only potentially vulnerable to clawbacks if Dustin and Cari were so subject at the time of transfer -- a risk that is several orders of magnitude lower than for a crypto company, or most other kinds of company for that matter.

As an aside, if there is one low-hanging transparency fruit here -- mid-size+ charities absolutely should be posting their 990s (or equivalent forms) promptly after they are filed. We should not have to wait years for the IRS to cough up the data in an accessible form, or march to the charity's headquarters for the in-person inspection that is guaranteed by law.

Thanks for pointing that out. Yes, looks like more could be done to secure OpenPhil's bag.

This is spot on. The Sequoia letter makes the correct point that FTX accounted for something like <1% of the fund vehicles used to invest in it. Most startups fail, and while the manner matters -- I'm sure they are having some tough LP conversations -- this does not remotely represent an existential risk to their business. This will be forgotten in due time.

EA had unique insight into SBF the person outside of FTX pitch meetings. As others mentioned, EA also had a view from the Alameda breakup. I have no idea if that information would/could have been helpful, it is meaningfully distinct from the information VCs had (although, lol at not getting the balance sheet for an investment in a financial services company…).

One more point: the reputational damage to EA here is going to be orders of magnitude worse than for their investors. How have potential donors/employees/partners changed their view of Insight Venture Partners as a result of FTX? No one cares. Has this meaningfully hurt EA's public reputation? Undoubtedly, especially given the recent fanfare around WWOTF. I am worried about this and am not sure what to do about it.

Thanks for this take. This is one of the best takes on the issue I've read on here. I particularly agree with the point that 'investing' in FTX carried significantly more risk for the EA community than for the institutional investors that did invest. One small thing to add on this point: while there was a group of investors that did invest in FTX, there were likely also various potential investors who decided not to invest after some due diligence; we just don't know of them.

Good piece: https://www.epsilontheory.com/the-macguffin-part-2-the-story-arc-of-sbf-and-ftx/#.Y3ehamoqN1A.twitter

"One reaction I've seen in several places, mostly outside EA, is something like, 'this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?' I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong."

I'm not sure there's a big difference between 'cause for scrutiny' and 'insufficient warning flags' here. It seems like there were many causes for scrutiny, all of which were basically ignored by the people who could have acted on them. Had those influential people been more diligent, presumably these concerns would have loomed larger, and this might have been foreseen.

"The issue is, we had a lot more on the line than their investors did."

Big +1 

FTX is like Enron exploding in the center of EA. 

"One reaction I've seen in several places, mostly outside EA, is something like, 'this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?' I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong."

To make money, you not only have to be right, you have to be right at the right time. Imagine you predicted the COVID pandemic in 2018 and shorted the market starting that year. The market kept rising through 2019, so by the time the crash arrived in 2020 you could already have been broke, with no cash left to hold the position.
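A rough worked sketch of that timing problem, with made-up numbers (the 4x leverage is hypothetical, and the yearly returns only roughly follow the shape of 2018-2020):

    # Short the index at the start of 2018 with 4x leverage, rebalanced
    # yearly, on $100 of capital. Illustrative returns: slightly down 2018,
    # strongly up 2019, then the early-2020 crash.
    capital = 100.0
    leverage = 4.0
    index_returns = [("2018", -0.06), ("2019", +0.29), ("2020 crash", -0.34)]
    for period, r in index_returns:
        capital *= 1 + leverage * (-r)  # a short gains when the index falls
        print(f"{period}: index {r:+.0%} -> capital ${capital:,.2f}")
        if capital <= 0:
            print("Wiped out before the thesis paid off.")
            break

The short is eventually right about the crash, but the leveraged loss in 2019 exhausts the capital first, so the 2020 gain is never collected.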

On the other hand, EA is not trying to make money. So, the EA community doesn't care about the timing as much as a trader does. EA cares about preparation. If we know that the COVID pandemic is going to happen in 2018, we start preparing in 2018, and when it does happen, in 2020, we are prepared. 

Thus, for the EA community, what was really more salient were articles such as this piece by Paul Krugman:

"stablecoins...resemble 19th-century banks,...when paper currency was issued by largely unregulated private institutions. Many of these banks failed, in some cases due to fraud but mostly due to bad investments."

[this is a repost from a comment elsewhere]
