
Many have now written about the implications of what is unfolding at FTX, including those considered thought leaders within the community. What is missing is wider acknowledgment that core parts of existing EA methodology make things like fraud defensible, and therefore likely. And unless the community is able to acknowledge this, I would assign a high probability to something similar happening again in the future.

There are two main categories of responses I've seen so far.

  1. SBF made a bad calculation. His math was wrong. His decisions were not positive EV. He should have seen this. Instead he irrationally doubled down on bad bets.
  2. Consequentialists must keep one foot in deontology and virtue ethics as a check and balance. Fraud does not pass muster on those, so don't do it.

I'd argue that (2) is actually a variation of (1). For what is the point of identifying as a consequentialist (or having one foot in it), unless we know when to shift our weight from one side to the other?

What I think the second response is actually saying is: "We are currently not sophisticated enough to make certain calculations. Each additional order of effects adds compounding error and uncertainty. Until we have more accurate models, let other ethical models serve as our guide." i.e. treat deontology as a fallback heuristic when uncertainty is high.

Though I'd imagine this should feel unsatisfactory and hand-wavy to committed consequentialists or utilitarians. The world is highly uncertain! If the output of an ethical model when faced with uncertainty is to defer to other models, then it's not a particularly useful model. And I think those making the second response recognize this. The post We must be very clear: fraud in the service of effective altruism is unacceptable ends with the following:

Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon. In my opinion, I think such strategies of credible pre-commitments are extremely important for cooperation and coordination.

Furthermore, I will point out, if FTX did engage in fraud here, it was clearly in fact not a good idea in this case: I think the lasting consequences to EA—and the damage caused by FTX to all of their customers and employees—will likely outweigh the altruistic funding already provided by FTX to effective causes.

i.e. SBF made a bad calculation. And the costs will likely outweigh the benefits. So what we're currently left with as the primary responses of this community are variations of "What SBF did was wrong, because his math was wrong".

The implication is that the math for committing fraud almost never works out. But, it might. And at some point, at some odds, in some model, fraud will output a positive EV. Much of what SBF has spoken about in the past suggests that is what happened with him. And the lower probability outcome (fraud detected) from his model ended up hitting. Whether or not one believes this to be the case with SBF, there is a reasonable hypothetical case to be made for someone to do something similar using these principles.
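To make the concern concrete, here is a minimal sketch, using entirely hypothetical and illustrative numbers, of how a naive single-number EV calculation can rate a fraudulent bet as positive. None of these figures come from the FTX case; the point is only that some assignment of probabilities and payoffs will always exist that makes the math come out favorable:

```python
# Hypothetical, illustrative numbers only: a naive single-number EV
# calculation of a risky, unethical bet.
p_caught = 0.2               # assumed probability the fraud is detected
value_if_undetected = 10.0   # assumed payoff (e.g. billions to effective causes)
value_if_caught = -5.0       # assumed harms if detected

# Weighted sum over the two outcomes.
ev = (1 - p_caught) * value_if_undetected + p_caught * value_if_caught
print(ev)  # 7.0 -> the naive model says "positive EV, do it"
```

Nothing in the EV framework itself rejects this conclusion; objections have to come from outside the calculation (disputing the inputs, or appealing to other ethical models).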

Further, there is a chance that at some point in the future, people will look back at some of the positive changes and reforms that come out of this episode, and think - "You know, maybe it was a good thing this happened!" 

This is unlikely to be a healthy approach or conclusion for this community. And so, here are some thoughts on things to consider and/or reform. 

A Possible Way Forward

High Certainty Altruism: "Effective Altruism" till recently was primarily about things like funding for distributing bednets, cash transfers, RCTs etc. These interventions have had a large positive impact on the world and saved many lives. We can say this because evidence shows this to be the case with high certainty. And I'm confident that this community will play an important role in finding many more of these interventions.

High Certainty Altruism (HCA) is a more responsible and honest description of the methods involved in this type of giving. There is often pushback on this point, but by labelling the giving done by this community as "effective", one implies that other approaches are "ineffective" - further implying that the very same ethical models one is treating as essential fallback heuristics are lacking in some way.

Also, many of the experimental actions, discussions and research done by those in the EA community are often conflated with the more run-of-the-mill HCA stuff - both internally and externally. Decoupling the two will be important going forward.

Abandon Expected Value Calculations: Or, continue to assign numbers, but skip the math. Unlike money, most things we care about in the world are not fungible. People are not fungible. Aspirations are not fungible. Gains in one place do not neatly cancel out or substitute losses in the other. This does not mean we abandon the rigor and clarity that comes with assigning numbers and probabilities when faced with complex tradeoffs. Nor do we ignore them as information or inputs into our decision making. 

But the act of adding or subtracting these numbers for EV or net-benefit calculations, to arrive at a neat answer or conclusion, is I think the source of many problematic decisions. It lends an undeserved aura of objectivity and strips accountability. I plan to write more about this later, but here is an example to demonstrate the two different approaches:

With EV:
Policy A: 100k Net Jobs Added
Policy B: 80k Net Jobs Added

Without EV:
Policy A: 120k Jobs Added, 20k Jobs Lost
Policy B: 85k Jobs Added, 5k Jobs Lost

Which is the better policy? Unlike "With EV", the "Without EV" approach presents numbers and information as inputs, but does not editorialize or imply what the "right" decision is. There is responsibility and accountability that comes with not treating these numbers as fungible. 

It's also a good check on hubris - one does not hide the negative impacts behind a positive sounding aggregate number. Some jobs are going to be lost. Whatever a person decides, that person will be at the receiving end of blame. The processes and ideas that can help a person navigate these decisions, and deal with the consequences and responsibilities, are where this community can play a healthy role.
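The contrast between the two presentations can be sketched in a few lines of code. This is an illustrative toy (the `Policy` class and the job figures are just the hypothetical numbers from the example above, not any real data), showing how collapsing to a single net figure erases exactly the information a decision maker should be accountable for:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    jobs_added: int
    jobs_lost: int

    @property
    def net(self) -> int:
        # The "With EV" view: aggregate into one number,
        # treating gains and losses as fungible.
        return self.jobs_added - self.jobs_lost

a = Policy("A", jobs_added=120_000, jobs_lost=20_000)
b = Policy("B", jobs_added=85_000, jobs_lost=5_000)

# "With EV": the aggregate ranks A above B and hides the 20k people
# who lose their jobs behind a positive-sounding net figure.
print(a.net, b.net)  # 100000 80000

# "Without EV": present both inputs and leave the judgement,
# and the accountability, with the decision maker.
for p in (a, b):
    print(f"Policy {p.name}: {p.jobs_added:,} added, {p.jobs_lost:,} lost")
```

The second printout does not rank the policies; it surfaces the tradeoff that the net figure silently resolves.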

Epistemology, not Ontology: One of the most fulfilling things about this community is its perpetual curiosity and quest for information. Collecting evidence, asking questions and discussing all the different ways in which people look at the world is where this community really shines. I'd like to see a shift of focus to becoming a source for all these different inputs into ethical decision making; and less on generating neat outputs of what is right vs. wrong. The trolley problem does not have a right answer - I'd feel horrible choosing either option. But there are still many things to be learned, and questions we can ask about why people might choose one option vs. the other, and how they might deal with the consequences of their tough decisions.


Comments

Abandon Expected Value Calculations: Or, continue to assign numbers, but skip the math. Unlike money, most things we care about in the world are not fungible. People are not fungible. Aspirations are not fungible. Gains in one place do not neatly cancel out or substitute losses in the other.

Unfortunately, my friend, the need to measure things cannot be avoided: fundamentally, we should streamline information so that decision makers can assess the viability of a given decision. Hard calculations and estimates are still required, in the most responsible way, to ensure that funds are distributed and utilized properly and to eliminate the possibility of errors and fraud (the reason most of us are concerned at the moment).

Yes of course - as I wrote:

This does not mean we abandon the rigor and clarity that comes with assigning numbers and probabilities when faced with complex tradeoffs. Nor do we ignore them as information or inputs into our decision making.

The issue arises when those numbers are used in the aggregate, for non-substitutable things. For example, when faced with the two job policies I mentioned as an example, is the policy with greater net jobs objectively the "most responsible way" to go?

Okay, point taken - better to join those two paragraphs so they are not read separately as standalone ideas. But to your point, the devil is always in the details, and hypothetical scenarios are very difficult to analyze. Perhaps add a better example to your argument, as I cannot understand why jobs were lost in the first place.
