Dylan Matthews has an interesting piece up in Vox, 'How effective altruism let SBF happen'.  I feel very conflicted about it, as I think it contains some criticisms that are importantly correct, but then takes it in a direction I think is importantly mistaken.  I'll be curious to hear others' thoughts.

Here's what I think is most right about it:

There’s still plenty we don’t know, but based on what we do know, I don’t think the problem was earning to give, or billionaire money, or longtermism per se. But the problem does lie in the culture of effective altruism... it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse.

Like many youth-led movements, there's a tendency within EA to be skeptical of established institutions and ways of running things. Such skepticism is healthy in moderation, but taken to extremes can lead to things like FTX's apparent total failure of financial oversight and corporate governance. Installing SBF as a corporate "philosopher-king" turns out not to have been great for FTX, in much the same way that we might predict installing a philosopher-king as absolute dictator would not be great for a country.

I'm obviously very pro-philosophy, and think it offers important practical guidance too, but it's not a substitute for robust institutions. So here is where I feel most conflicted about the article. Because I agree we should be wary of philosopher-kings. But that's mostly just because we should be wary of "kings" (or immature dictators) in general.

So I'm not thrilled with a framing that says (as Matthews goes on to say) that "the problem is the dominance of philosophy", because I don't think philosophy tells you to install philosopher-kings. Instead, I'd say, the problem is immaturity, and lack of respect for established institutional guard-rails for good governance (i.e., bureaucracy). What EA needs to learn, IMO, is this missing respect for "established" procedures, and a culture of consulting with more senior advisers who understand how institutions work (and why).

It's important to get this diagnosis right, since there's no reason to think that replacing 30 y/o philosophers with equally young anticapitalist activists (say) would do any good here. What's needed is people with more institutional experience (which will often mean significantly older people), and a sensible division of labour between philosophy and policy, ideas and implementation.

There are parts of the article that sort of point in this direction, but then it spins away and doesn't quite articulate the problem correctly. Or so it seems to me. But again, curious to hear others' thoughts.

Comments

(EDIT: Added a follow-up comment after reading the original article)

A lot of this discourse feels like it's missing the point to me. FTX was not an EA org. (Alameda was founded mostly with EAs, but then most of them left, in part because of bad governance and lack of ethics!). FTX was not beholden to EAs, and EA and EA orgs didn't have any say in how FTX was governed. EA may have been well placed to blow the whistle on this, and maybe to say things to Sam about this, but it seems very off to say that the governance of EA orgs led to the bad governance of FTX.

(Also, Alameda was founded in ~2018, when the EA scene was very, very different and much less mature (and probably much worse governed). I expect many bad governance decisions get baked in when an org is founded.)

I think the correct reference class for the governance of FTX is more like startups, esp crypto startups. I don't see that much of a causal link between, eg, how well OpenPhil or MIRI is governed, and how FTX was governed.

To be clear, I am not arguing that EA orgs are well governed; I just think these should be two separate conversations. Further, the fact that FTX was badly governed and that this led to a disaster is at best weak evidence that EA orgs governed in a similar way will also lead to disaster, given how different the work is (it's harder to fuck up when you aren't managing >$10bn of customer funds!). (Though, to be clear, if I discovered an EA org being governed in the same way as FTX, I'd consider it a concerning red flag. It just should have been a red flag regardless of the FTX blow-up!)

Jason:

He is saying something about SBF's charitable ventures, of which at least FTXFF is reasonably seen as an EA organization based on the staff list:

There’s a fundamental difference between Bankman-Fried’s charitable efforts and august ones like the Rockefeller and Ford foundations: these philanthropies are, fundamentally, professional. They’re well-staffed, normally run institutions. They have HR departments and comms teams and accountants and all the other stuff you have when you’re a grown-up running a grown-up organization.

. . . .

The good news for EAs is that Open Philanthropy, the remaining major EA-aligned funding group, is a much more normal organization. Its form of professionalization is something for the rest of the movement to emulate. [Elsewhere, the author has kind words for GiveWell as an organization.]

I am less than convinced, though, that this particular criticism is fairly directed at EA as a whole as opposed to SBF in particular. And it may be cultural/generational: how many 30-year-old billionaires are going to be interested in passively giving away their wealth and letting Rockefeller/Ford-style bureaucrats decide where it goes? I'm a decade older, and that sort of passivity/disengagement feels awfully unappealing to me.

My take on the author's criticism is to repeat the Cynic's Golden Rule: "He who has the gold makes the rules." If SBF really wanted to do it his way, what was EA-as-a-community supposed to do, refuse to take his money? That's a major downside to hits-based donor development. If the EA movement continues to rely so heavily on a few megadonors, the Cynic's Golden Rule will remain in full force whenever the megadonors wish. It is a testament to Moskovitz and Tuna's humility that Open Phil operates as it does, not an expectation EA can demand of would-be megadonors with its non-existent leverage.

Agreed, I made the above comment responding to the forum post, and hadn't read the article - I've added a follow-up comment responding to the article's claims about philanthropic governance.

TLDR: As far as I can tell, the governance of FTXFF was fine and SBF didn't have undue power. Even if he did have undue power, this seems totally unrelated to the multi-billion dollar blow-up, harm to countless people, harm to the EA ecosystem and to EA's reputation.

As I understand it, CEA pseudo-incubated what became Alameda and/or FTX, working closely with SBF to help him get set up. Obviously that doesn't make them responsible for what happened ~5 years later, but nor does it seem reasonable to treat them as unrelated.

That's not my understanding? I'm curious where you heard that from.

Either way, I stand by this - I believe CEA in particular was a big mess back then:

(Also, Alameda was founded in ~2018, when the EA scene was very, very different and much less mature (and probably much worse governed). I expect many bad governance decisions get baked in when an org is founded.)

I think that paragraph is quite misguided. "Becoming much more risk averse" is a great way to stop doing anything at all, because everything has to pass through eight layers of garbage. On top of this, it's not like "literally becoming the US federal government" and "not having any accounting or governance at all" are your only two options; this creates a sad false dichotomy. SBF was actively and flagrantly ignoring governance, regulation, and accounting. This is not remotely common for EA orgs.

Like, for the last couple of decades we've been witnessing over and over again how established, risk-averse institutions fail because they're unable to compete with new, scrappy, risk-tolerant ones (that is, startups).

"Good governance" and bureaucracy are, while correlated, emphatically not the same thing. EA turning into a movement that fundamentally values these over just doing good in the world as effectively as possible will be a colossal failure, because bureaucracy is a slippery slope and the Thing That Happens when you emulate the practices that have been used for centuries is that you end up not being able to do anything. I'd be very sad if this was our final legacy.

The "move fast and break things" model of startups works great for something like software businesses where the failures are harmless and easily forgotten. 

But we're not in the software business. We're in the charity business. And in the charity business, reputation matters in a real, monetary sense. Thanks to FTX, EA has now been associated in pretty much every major newspaper with reckless, harmful, and irresponsible behavior. If you make an EA startup that goes wrong somehow, it's going to be written up in the Guardian or the Wall Street Journal, reminding everyone of FTX again.

And then potential donors are going to read those articles, and know that other people around them are reading said articles as well. If, when people see the words "effective altruism", the words that come to mind are "fraud and mismanagement", then most donors are going to go somewhere else, where their donations are met with applause rather than raised eyebrows. This damages everyone associated with EA, no matter how responsible they are for the latest mistake.

A small amount of bureaucracy and checks and balances is a very small price to pay, if we want to avoid being permanently hobbled by a poor reputation. 

Jason:

I think an increase in bureaucracy / risk-aversion is inevitable -- and probably necessary -- with increasing size/power/influence after a certain point. The 51% coin flip is great when the wager is $100, not so great when it is all life on Earth. I would submit that part of the answer is to prevent any one organization from getting too massive so that it doesn't get mired down in bureaucracy and ossified. The one thing I will give FTXFF some credit for is the interest in regranting programs.

Right, I wouldn't want to over-correct, but personally "more respect for good governance (even at the cost of some increase in bureaucracy)" is the major lesson I've drawn from recent events. (I expect I'm still more anti-bureaucratic than most people, but maybe I'm finding a more balanced view than I previously had.)

I'm unsure whether "risk aversion" is the right way to put this, but even if it is I think we probably just want a bit more of it rather than much more.

FTX and Alameda definitely needed more bureaucracy -- as in, doing stuff in a way that doesn't resemble a scene from Idiocracy. https://docs.house.gov/meetings/BA/BA00/20221213/115246/HHRG-117-BA00-Wstate-RayJ-20221213.pdf  "Although our investigation is ongoing and detailed findings will have to await its conclusion, the FTX Group’s collapse appears to stem from the absolute concentration of control in the hands of a very small group of grossly inexperienced and unsophisticated individuals who failed to implement virtually any of the systems or controls that are necessary for a company that is entrusted with other people’s money or assets."

We should distinguish risk aversion, transparency, and bureaucracy. They're obviously related but different concepts. I would argue that transparency is far more important than risk aversion, the more so the less risk averse you are - and unfortunately nontransparency often seems to be correlated with risk-taking. This is sometimes justified on infohazard logic (cf MIRI in general) or some harder-to-pin-down lack of urgency to communicate controversial decisions (cf Wytham Abbey). Increasing transparency necessarily increases bureaucracy, but there are many other ways bureaucracy can increase, so we shouldn't expect it to balloon uncontrollably just because of one upward pressure.

I feel like most core EA organisations would come nowhere near meeting the transparency requirements GiveWell place on charities they recommend (though GiveWell themselves do impressively well on this score, so it's clearly not impossible for metacharities).

Strongly approve of this comment. 

Established procedures should be questioned. You should definitely use good business practices such as proper accounting and separation of entities with conflicts of interest, but you don't want to copy the copious amounts of "established procedures" that amount to getting nothing done through piles of pointless paperwork, administrative bloat, and endless committees who talk about nothing. There are lots of teams in EA who get a lot done, specifically because they aren't bogged down in bureaucracy and have a clear focus and mission.

Follow-up to my previous comment: in the comment above I was responding to the post written here, not the underlying article. I've now read the article, and broadly agree with it (esp on the significant over-focus on philosophy and controversial ideas within EA, and how this is harmful), but think that the claims about governance are badly argued and don't hold up, even if the conclusions may be correct.

The specific part of the article on the importance of good, professional governance in foundations is something I feel more confused about, though Matthews was not arguing that EA being better governed would have prevented FTX collapsing, more that it would have reduced general harm to EA + limited SBF's power.

Like, sure, I think SBF's various foundations being further from him would have been a good idea a priori (and even better in hindsight!). But I don't actually see how the FTX disaster is evidence for this, beyond being evidence of SBF's poor judgement and bad intentions. Like, the issue at hand is not that SBF funded a bunch of pandemic prevention etc! I don't think it's even that he was involved in the foundation's decisions ("foundation staff would sometimes cc SBF on a pitching email"). I think it's that funding was unstable and got pulled out unexpectedly/there are risks of clawbacks, that it harmed the reputation of those who received it, that the funding was gained immorally (though evaluating this is confusing to me, since some of their money was gained legitimately, some was from stealing from customers), that it's harmed the reputation of the movement + causes he donated to, and that the giving plausibly allowed him to launder his reputation in a way that gained him more credibility and less suspicion to perpetuate the fraud. None of these are that related to how involved he was in the foundations!

And, finally, I'd argue that the Future Fund was actually much more decentralised and democratic than eg OpenPhil, given their enormous regranter program. It wasn't quite just giving people money and letting them do whatever they wanted with it, but I'd say that it made giving much more diverse and accounted for many more different perspectives than any other foundation I'm aware of. (I think this comes with many downsides, to be clear, though I'm net a fan of the regranter program. Just that criticising it as SBF being too involved seems silly.)

Sharing my reflections on the piece here (not directly addressing this particular post, but rather my own reflections, which I shared with a friend).

While I agree with lots of points the author makes and think he raises valuable critiques of EA, I don’t find his arguments related to SBF to be especially compelling.  My run-through of the perceived problems within EA that the author describes and my reactions:

  1. The dominance of philosophy. I personally find parts of long-termism kooky and I'm not strongly compelled by many of its claims, but the Vox author doesn’t explain how this relates to SBF (or his misdeeds)... it feels more like shoehorning a critique of EA into a piece on SBF? 
  2. Porous boundaries between billionaires and their giving. So yes, it sounds like SBF was very directly involved in the philanthropy his funds went toward, but I don’t think that caused (much? any?) incremental reputational harm to EA vs. a world where he created the “SBF family foundation” and had other people running the organization. 
  • If I wanted to rescue this argument, maybe I could say SBF’s behavior here is representative of a common trait of his (at FTX and in his charity) – SBF doesn’t even have the dignity to surround himself with yes-men; he insists on doing it all himself! And maybe that’s a red flag RE cult of personality/genius and/or fraud that EA should have caught on to. 
  • I will say, though, that the FTX Future Fund had a board/team that was fairly star-studded and ran a big re-granting program (i.e., let others make grants with their money). Which is to say I’m not sure how directly involved SBF actually was in the giving. [As an aside, I think it’s fine for billionaires to direct their own giving and am a lot more suspect of non-profit bloat and organizational incentives than the Vox author is.] 
  3. Utilitarianism free of guardrails. I agree a lack of guardrails is a problem, but: 
  • a) On utilitarianism’s own account it seems to me you should recognize that if you commit massive fraud you’ll probably get caught and it will all be worthless (+ cause serious reputational harm to utilitarianism), so then committing the fraud is doing utilitarianism wrong. [I don’t think I’m no-true-Scotsman-ing here?] 
  • b) More importantly… the author doesn't explain how unabashed utilitarianism led to SBF's actions - it's sort of vaguely hand-waving and trying to make a point by association vs. actual causal reasoning / proof, in the same vein as the dominance of philosophy point above? I guess the steelman is: SBF wanted to do the most good at any cost, and genuinely thought the best way to do so was to commit fraud (?) A bit tough for me to swallow. 
  4. Utilitarianism full of hubris. A rare reference to evidence (well, an unconfirmed account, but at least it’s something!) Comparing the St. Petersburg paradox to SBF figuring let’s double-or-nothing our way out of letting Alameda default is an interesting point to make, but SBF's take on this was so wild as to surprise other EA-ers. So it strikes me as a point in favor of “SBF has absurd viewpoints and his actions reflect that” vs. “EA enabled SBF.” Meanwhile the author moves directly from this anecdote to “This is not, I should say, the first time a consequentialist movement has made this kind of error” (emphasis added). SBF != the movement and I think the consensus EA view is the opposite of SBF’s, so this feels misleading at best.

One EA critique in the piece that resonated with me - and I'm not sure I'd seen it put so succinctly elsewhere - is: 

“The philosophy-based contrarian culture means participants are incentivized to produce ‘fucking insane and bad’ ideas, which in turn become what many commentators latch to when trying to grasp what’s distinctive about EA." 

While not about SBF, it's a point I don't see us talking about often enough with regard to EA perceptions / reputation and I appreciated the author making it. 

TL;DR: I thought it was an interesting and thought-provoking piece with some good critiques of EA, but the author (or - perhaps more likely - editor who wrote the title / sub-headers) bit off more than they could chew in actually connecting  EA to SBF's actions.

“The philosophy-based contrarian culture means participants are incentivized to produce ‘fucking insane and bad’ ideas, which in turn become what many commentators latch to when trying to grasp what’s distinctive about EA." 

(Was that originally in the article? If so it's been edited now)

Regardless, I've been concerned for years about the perverse incentives for (EA) academics  both to produce weird ideas and to end the discussion of those ideas with 'more research necessary'. While I also disagree with much of the article, I'm glad to finally see that sentiment in print. It needs to be discussed much more IMO.

Just seeing this, but yes it was a quote from the original piece! FWIW I appreciate your use of “weird” vs. the original author’s more colorful language (though no idea if that’s what your pre-edit comment was in reference to)

The key passages (my emphasis):

Longtermism seems weird not because of its critics but because of its proponents: it’s expressed mainly by philosophers, and there are strong incentives in academic philosophy to carry out thought experiments to increasingly bizarre (and thus more interesting) conclusions.

This means that longtermism as a concept has been defined not by run-of-the-mill stuff like donating to nuclear nonproliferation groups, but by the philosophical writings of figures like Nick Bostrom, MacAskill, Greaves, and Nick Beckstead, figures who have risen to prominence in part because of their willingness to expound on extreme ideas.

These are all smart people, but they are philosophers, which means their entire job is to test out theories and frameworks for understanding the world, and try to sort through what those theories and frameworks imply. There are professional incentives to defend surprising or counterintuitive positions, to poke at widely held pieties and components of “common sense morality,” and to develop thought experiments that are memorable and powerful (and because of that, pretty weird).

This isn’t a knock on philosophy; it’s what I studied in college and a field from which I have learned a tremendous amount. It’s good for society to have a space for people to test out strange and surprising concepts. But whatever the boundary-pushing concepts being explored, it’s important not to mistake that exploration for practical decision-making.

[…]

The dominance of academic philosophers in EA, and those philosophers’ increasing attempts to apply these kinds of thought experiments to real life — aided and abetted by the sudden burst of billions into EA, due in large part to figures like Bankman-Fried — has eroded the boundary between this kind of philosophizing and real-world decision-making. Poets, as Percy Shelley wrote, may be the unacknowledged legislators of the world, but EA made the mistake of trying to turn philosophers into the actual legislators of the future. A good start would be more clearly stating that funding priorities, for now, are less “longtermist” in this galaxy-brained Bostrom sense and more about fighting specific existential risks — which is exactly what EA funders are doing in most cases. The philosophers can trod the cosmos, but the funders and advocates should be tethered closer to Earth.

[…]
 

The problem is utilitarianism free from any guardrails …

Sam Bankman-Fried is a hardcore, pure, uncut Benthamite utilitarian. His mother, Barbara Fried, is an influential philosopher known for her arguments that consequentialist moral theories like utilitarianism that focus on the actual results of individual actions are better suited for the difficult real-world trade-offs one faces in a complex society. Her son apparently took that insight very, very seriously.

Effective altruists aren’t all utilitarians, but the core idea of EA — that you should attempt to act in such a way to promote the greatest human and animal happiness and flourishing achievable — is shot through with consequentialist reasoning. The whole project of trying to do the most good you can implies maximizing, and maximizing of “the good,” and that is the literal definition of consequentialism.

It’s not hard to see the problem here: If you’re intent on maximizing the good, you better know what the good is — and that isn’t easy. “​​EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA,” Holden Karnofsky, the co-CEO of Open Philanthropy and a leading figure in the development of effective altruism, wrote in September. “By default, that seems like a recipe for trouble.”

Indeed it was. It looks increasingly likely that Sam Bankman-Fried appears to have engaged in extreme misconduct precisely because he believed in utilitarianism and effective altruism, and that his mostly EA-affiliated colleagues at FTX and Alameda Research went along with the plan for the same reasons.

 

[…]
 

I think taking a high-earning job with the explicit aim of donating the money still makes a lot of sense for most big-money options.

But what SBF did was not just quantitatively but qualitatively different from classic “earn to give.” You can make seven figures a year as a trader in a hedge fund, but unless you manage the whole fund, you probably won’t become a billionaire. Bankman-Fried very much wanted to be a billionaire — so he could have more resources to devote to EA giving, if we take him at his word — and to do that, he set up whole new corporations that never would’ve existed without him. Those corporations then engaged in incredibly risky business practices that never would’ve occurred if he and his team hadn’t entered the field. He was not one-for-one replacing another finance bro who would have used the earnings on sushi and strippers rather than altruistic causes. He was building a whole new financial world, with consequences that would be much grander in scale.

And in building this world, he acted like a vulgar utilitarian. Philosophers like to talk about “biting the bullet”: accepting an unsavory implication of a theory you’ve adopted, and arguing that this implication really isn’t that bad.

 

[…]
 

Bankman-Fried’s error was an extreme hubris that led him to bite bullets he never should have bitten. He famously told economist Tyler Cowen in a podcast interview that if faced with a game where “51 percent [of the time], you double the Earth out somewhere else; 49 percent, it all disappears,” he’d keep playing the game continually.

 

This is known as the St. Petersburg paradox, and it’s a confounding problem in probability theory, because it’s true that playing the game creates more happy human lives in expectation (that is, adjusting for probabilities) than not playing. But if you keep playing, you’ll almost certainly wipe out humankind. It’s an example of where normal rules of rationality seem to break down.

But Bankman-Fried was not interested in playing by the normal rules of rationality. Cowen notes that if Bankman-Fried kept this up, he’d almost certainly wipe out the Earth eventually. Bankman-Fried replied, “Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.”

These are fun dorm room arguments. They should not guide the decision-making of an actual financial company, yet there is some evidence they did. An as-yet-unconfirmed account of an Alameda all-hands meeting describes CEO Caroline Ellison explaining to staff that she and Bankman-Fried faced a choice in early summer 2022: either to let Alameda default after some catastrophic losses, or to raid consumer funds at FTX to bolster Alameda. As the researcher David Dalrymple has noted, this was basically her and Bankman-Fried making a “double or nothing” coin flip: By taking this step, they reasoned they could either save Alameda and FTX or lose both (as wound up happening), rather than keep just FTX, as in a scenario where the consumer funds were not raided.
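(To spell out the arithmetic behind this passage - a back-of-envelope sketch of my own, not from the article, assuming the whole stake is re-wagered each round of an independent 51/49 double-or-nothing game; the variable names below are purely illustrative:)

```python
# Illustrative sketch (not from the article): repeated 51/49 "double or nothing".
# Each play either doubles the stake (p = 0.51) or loses everything (p = 0.49).
p_win = 0.51

for n in [1, 10, 50, 100]:
    expected_multiplier = (2 * p_win) ** n  # expectation grows like 1.02^n
    survival_prob = p_win ** n              # chance of never having hit "nothing"
    print(f"after {n:3d} plays: expected value x{expected_multiplier:7.2f}, "
          f"probability nothing has been lost = {survival_prob:.2e}")
```

The expected value keeps rising with every additional play, while the probability that anything at all is left collapses toward zero (vanishingly small, around 10^-30, after 100 plays), which is the sense in which "normal rules of rationality seem to break down."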

The stuff about academic incentives makes it sound like there's some "commonsensical" alternative to longtermism out there that philosophers are burying in order to be more "interesting", and that just isn't true.  There's literally no possible way to systematize ethics without ending up somewhere puzzling.

I've written elsewhere about the importance of distinguishing ethical theory and practice. This is a completely standard part of the consequentialist philosophical tradition.  So again, I sort of agree with some of what Matthews says here, except for the philosophy-blaming part of it. 

I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making".  That would presumably require a critique of EA funding priorities (esp. by the Future Fund, as directed by Will and Nick), but he instead seems to allow that actual funding decisions have been well-grounded (at least "in most cases"), and merely recommends "more clearly stating" that this is so. That seems to give the game away that his critique here is purely about optics and communications, and not the "real-world decision-making" at all.

Finally, on SBF's lack of guard-rails: yes, he made crazy bad decisions. There is no philosophical view on which he made wise decisions.  He didn't maximize happiness. (Bentham would be rolling in his grave right now, if he had a grave.)  So the worries about maximizing the wrong thing are completely irrelevant here.  The problem was a total lack of practical wisdom or prudence.

As J.S. Mill put it:

People talk as if... at the moment when some man feels tempted to meddle with the property or life of another, he had to begin considering for the first time whether murder and theft are injurious to human happiness. Even then I do not think that he would find the question very puzzling... 

There is no difficulty in proving any ethical standard whatever to work ill, if we suppose universal idiocy to be conjoined with it; but on any hypothesis short of that, mankind must by this time have acquired positive beliefs as to the effects of some actions on their happiness; and the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better.

I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making".

Have you visited the 80,000 Hours website recently?

I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.

A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).

(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)

As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.

I’m glad you shared the J.S. Mill quote.

…the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better

EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).

To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.

In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.

My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top “influencers”, and many of the ”second tier”, are not.

(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)

Distinguish:
(i) philosophically-informed ethical practice, vs
(ii) "erod[ing] the boundary between [fantastical thought experiments] and real-world decision-making"

I think that (i) is straightforwardly good, central to EA, and a key component of what makes EA distinctively good.  You seem to be asserting that (ii) is a common problem within EA, and I'm wondering what the evidence for this is.  I don't see anyone advocating for implementing the repugnant conclusion in real life, for example.

I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business.

I think this is conflating distinct ideas.  The "risky business" is simply real-world decision-making.  There is no sense to the idea that philosophically-informed decision-making is inherently more risky than philosophically ignorant decision-making. [Quite the opposite: it wasn't until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.]

 Philosophers think about tricky edge cases which others tend to ignore, but unless you've some evidence that thinking about the edge cases makes us worse at responding to central cases -- and again, I'm still waiting for evidence of this -- then it seems to me that you're inventing associations where none exist in reality.

EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers.

Of course. The end of the Mill quote is just flagging that traditional social norms are not beyond revision. We may have good grounds for critiquing the anti-gay sexual morality of our ancestors, for example, and so reject such outmoded norms (for everyone, not just ourselves) when we have truly "succeeded in finding better".

there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.

Do you take yourself to be disagreeing with me here?  (Me: "People shouldn't be kings". You: "systematizing philosophers shouldn't be kings!"  You realize that my claim entails yours, right?)  I'm finding a lot of this exchange somewhat frustrating, because we seem to be talking past each other, and in a way where you seem to be implicitly attributing to me views or positions that I've already explicitly disavowed.

My sense is that we probably agree about which concrete things are bad; you perhaps have the false belief that I disagree with you on that, but actually the only disagreement is about whether philosophy tells us to do the things we both agree are bad (I say it doesn't).  But if that doesn't match your sense of the dialectic, maybe you can clarify what it is that you take us to disagree about?

[12/15: Edited to tone down an intemperate sentence.]

There is no sense to the idea that philosophically-informed decision-making is inherently more risky than philosophically ignorant decision-making. [Quite the opposite: it wasn't until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.]

I strongly disagree with this. The key reason is: most of the time, norms that have been exposed to evolutionary selection pressures beat explicit “rational reflection” by individual humans. One of the major mistakes of Enlightenment philosophers was to think it is usually the other way around. These mistakes were plausibly a necessary condition for some of the horrific violence that’s taken place since they started trending.

I often run into philosophy graduates who tell me that relying on intuitive moral judgements about particular cases is “arrogant”. I reply by asking “where do these intuitions come from?” The metaphysical realists say “they are truths of reason, underwritten by the non-natural essence of rationality itself”. The naturalists say: “these intuitions were transmitted to you via culture and genetics, itself subject to aeons of evolutionary pressure”. I side with the naturalists, despite all the best arguments for non-naturalism (to my mind, they’re mostly bad!).

One way to think about the 21st century predicament is that we usually learn via trial and error and selection pressures, but this dynamic in a world with modern technology seems unlikely to go well.

it wasn't until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.

I agree that philosophers, especially Derek Parfit, Nick Bostrom and Tyler Cowen*, have helped get this up the agenda. So too have many economists, astronomers, futurists, etc. Philosophers don’t have a monopoly on identifying what matters in practice—in fact they’re usually pretty bad at this.

Same thing goes if we look at social movements instead of individuals: the anti-nuclear bomb and environmental folks may have done more for getting catastrophic risk up the agenda than effective altruism has so far—especially in terms of generating a widespread culture of concern and sense of unease, which certainly warmed up the audience for Bostrom, Parfit, and so on.

The effective altruism movement is only just getting started (hopefully), and it has achieved remarkable successes already. So I do think we’re on track to play a critical role, and we have Bostrom and Parfit and Ord and Sidgwick and Cowen to thank for that—along with many, many others.

*Those who don’t see Tyler Cowen as fundamentally a philosopher—perhaps one of the greats, certainly better than Parfit (with whom he collaborated early on)—are not following carefully.

I’m not going to respond to the “show me the evidence” requests for now because I’m short on time and it’s hard to do this well. Also: I think you and most readers can probably identify a bunch of evidence in favour of these takes if you take a while to look.

I’m sorry to hear you’re finding this frustrating. Personally I’m enjoying our exchange because it’s giving me a reason to clarify and write down a bunch of things I’ve been thinking about for a long time, and I’m interested to hear what you and others make of them.

On Twitter I suggested we arrange a time to call. Would you be up for this? If yes, send me a DM.

There's literally no possible way to systematize ethics without ending up somewhere puzzling.

Central plank of this perspective: systematizing ethics may not be the best idea, but some kinds of folks have a hard time recognising this. Systematising has its merits but if you find ideological mess hard to tolerate, you shouldn't be a king.

Related reading:

I myself am a moral anti-realist, so I don't care much about these debates, though it's perpetually interesting to see debates on morality.

The stuff about academic incentives makes it sound like there's some "commonsensical" alternative to longtermism out there that philosophers are burying in order to be more "interesting", and that just isn't true.  There's literally no possible way to systematize ethics without ending up somewhere puzzling.

This seems importantly strawmanny. Matthews' point (which I strongly agree with, fwiw) is an outside view one - something like 'there are strong financial and reputational incentives for (EA) academics to reach "interesting" conclusions requiring more research' and thus, by what I take as its extension, that whatever the 'true importance' of such concerns is, we should expect it to be systematically overstated by those academics.

It is hardly a counterpoint to this for anyone (especially an academic!) to say 'ah, but those interesting conclusions are of true importance!' - any more than it would be to hear (say) super wealthy people arguing for lower taxation on the grounds that it encourages productivity. The arguments/inside view aren't necessarily wrong, but they just don't really interact with the outside view, and finding a good epistemic balance is very hard.

To date, as far as I'm aware, the EA movement has been entirely focused on the inside view arguments, totally ignoring the incentives Matthews observes. As interested as I personally am in utilitarian philosophy, it's very unclear to me whether any of the puzzles you mention have any practical relevance to doing good in the current world, or whether more research would make it any clearer. And in addition to the worries about population ethics, there's a whole bunch of EA-adjacent research programmes that we could completely ignore (and have taken no practical action on to date), which nonetheless get significant funding that might counterfactually have gone to mosquito nets, GCR-prevention, etc:

  • Doomsday argument reasoning
  • Simulation argument reasoning
  • Wild animal suffering
  • Infinitarian ethics
  • Moral uncertainty
  • Cluelessness
  • Research into obscure decision theories*

* (less sure about this one. Maybe MIRI have done something with it behind closed doors, but if so I don't believe they've communicated it)

On top of those examples, Will has openly advocated the importance of 'keeping  EA weird'.

So I think this is an issue that deserves a lot more scrutiny (presumably, ironically, most of which would come from academic EAs).

Distinguish two critiques in this general vicinity:

(1) Longtermism seems weird because its main proponents are philosophers who have professional incentives to make "interesting"/extreme claims regardless of their truth or plausibility.

(2) Academics are likely to "systematically overstate" the importance of their own research, so we shouldn't take their claims about "true importance" at face value.

These are two very different critiques!  Matthews clearly said (1), and that's what I was responding to.  His explanatory claim is demonstrably false.  Your critique (2) seems right to me, though it trivially generalizes to the broader claim:

(2*) Everyone is likely to systematically overstate the importance of their own work, so we shouldn't take their claims about the true importance of their work at face value.

I agree that we need to critically evaluate claims that someone's work is important.  There's nothing special about academic work in this respect, though.

I agree that we need to critically evaluate claims that someone's work is important.  There's nothing special about academic work in this respect, though.

Strong disagree with this part. Academics, in the sense of 'people who are paid to do specialised research', are substantially more incentivised to overstate their value than a) people who aren't paid, or b) people who are paid to do more superficial/multi-focus research (eg consultants), and who could therefore pivot easily if it turned out some project they were on was low value.

It sounds like you're talking about researchers outside of academia.  Academics aren't paid directly for their research, and the objective "importance" of our research counts for literally nothing in tenure and promotion decisions, compared to more mundane metrics like how many papers we've published and in what venues, and whether it is deemed suitably impressive (by disciplinary standards, which again have zero connection to objective importance) by senior evaluators within the discipline.

A tenured academic, like a supreme court justice, has a job for life which leaves them far less vulnerable to incentives than almost anyone else.

Why was this downvoted?

Richard: you wrote…

I agree we should be wary of philosopher-kings. But that's mostly just because we should be wary of "kings" (or immature dictators) in general.

Two options:

(1) An immature 30-year-old king.

(2) An immature 30-year-old king who has chosen to spend the last 10 years in a bubble of consequentialist-flavoured philosophy. [1]

Knowing nothing else, I’d pick (1). You?

Option (2) seems much higher variance. Most likely, this guy is a midwit, or perhaps a high IQ idiot. In either case, electing him king probably means serious trouble ahead.

Given more information I can imagine picking (2). I’d be looking for evidence of practical wisdom, whether that’s taking context seriously, Karnofsky-style “cluster thinking”, Burkean worldliness, or the pragmatism of Fat Tony.

See also: Byrne Hobart and Dwarkesh Patel on hardcore believers & monasteries.

[1] I’m drawing a caricature here, to make the point clearer. I’m not sure how accurate (2) would be as a description of SBF, but it seems roughly right.

If one of the main motivations for effective altruism is to challenge traditional, ineffective ways of doing things, such as bureaucracy, mismanagement, passivity, and established procedures, and to differentiate itself from the world of charity and its connection to the establishment, especially in societies like the UK, then traditional institutional experience held by older people won't be enough. I would argue that a better goal would be identifying what is not working and needs improvement when it comes to learning about and fighting fraud, including developing better tools to do so.

To truly go against the traditional, ineffective way of doing things and create differentiation, effective altruism needs to prioritize developing new tools and approaches for addressing issues like charity fraud. This could include using coordination technologies, artificial intelligence, and knowledge networks to identify and review potential frauds, as well as working on building a community of experts and an academic network who can help develop and implement these solutions. By focusing on innovation and tackling these challenges head-on, effective altruism can continue to set itself apart and make a real impact.

I disagree that things are coupled in this way. You can be innovative and new in some important respects (like cause selection, prioritisation, taking philosophy seriously, etc.) while being boring and traditional in others (good governance, accounting, fraud detection, trust etc.).