The impact of the FTX scandal on EA is starting to hit the news. The coverage in this NY Times article seems fair to me. I also think the FTX Future Fund leadership's decision to jointly resign was the right thing to do, and it comes across that way in the article. Will MacAskill, I think, is continuing to show leadership in interfacing with the media; it's a big transition from his book tour not long ago to giving quotes to the press about FTX's implosion.
The article focuses on the impact this has had on EA:
[The collapse of FTX] has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Mr. Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Mr. Bankman-Fried’s charitable vehicles, and members of the effective altruism community are asking themselves whether they might have helped burnish his reputation...
... For a relatively young movement that was already wrestling over its growth and focus, such a high-profile scandal implicating one of the group’s most famous proponents represents a significant setback.
The article mentions the FTX Future Fund joint resignation, focusing on the grants that can no longer be honored and what those grants might have accomplished.
The article describes Will MacAskill inspiring SBF to switch his career plans and pursue earning to give, but it doesn't try to blame the fraud on utilitarianism or on EA. Speaking only for myself, I'm confused by people's eagerness to pin this on utilitarianism or the EA movement. The common-sense American lens for viewing these sorts of outcomes is a framework of personal responsibility: if SBF committed fraud, that points to a problem with his personal character, not with the moral philosophy he claims to subscribe to.
His connection to the movement in fact predates the vast fortune he won and lost in the cryptocurrency field. Over lunch a decade ago while he was still in college, Mr. Bankman-Fried told Mr. MacAskill, the philosopher, that he wanted to work on animal-welfare issues. Mr. MacAskill suggested the young man could do more good earning large sums of money and donating the bulk of it to good causes instead.
Mr. Bankman-Fried went into finance with the stated intention of making a fortune that he could then give away. In an interview with The New York Times last month about effective altruism, Mr. Bankman-Fried said he planned to give away a vast majority of his fortune in the next 10 to 20 years to effective altruist causes. He did not respond to a request for comment for this article.
Contrary to my expectation, the article was pretty straightforward in describing the global health/longtermism aspects of EA:
Effective altruism focuses on the question of how individuals can do as much good as possible with the money and time available to them. Historically, the community focused on low-cost medical interventions, such as insecticide-treated bed nets to prevent mosquitoes from giving people malaria.
More recently many members of the movement have focused on issues that could have a greater impact on the future, like pandemic prevention and nuclear nonproliferation as well as preventing artificial intelligence from running amok and sending people to distant planets to increase our chances of survival as a species.
Probably the most critical passage in the article was this:
Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, said that the issues raised by Mr. Bankman-Fried’s reversal of fortune acted as a “distorted fun-house mirror of a lot of the problems with contemporary philanthropy,” in which very young donors control increasingly enormous fortunes.
“They gain legitimation from their status as philanthropists, and there’s a huge amount of incentive to allow them to call the shots and gain prominence as long as the money is flowing,” Mr. Soskis said.
But even this critique centers on the purchase of status through philanthropy, the same problem we ourselves are wrestling with right now.
Edit: See Aaron Chang's comment for what he sees as the most glaring issue the article points out: "loose norms around board of directors and conflicts of interests between funding orgs and grantees".
I expect that other articles may take a harder look at EA. But I was heartened to see that in this case, at least, the author, Nicholas Kulish, treats EA as what I understand it to be: a lot of people earnestly trying to figure out how to make the world a better place, and trying to find the most ethical way to navigate a disaster.
As someone who has spent years spreading the message that humans are very prone to self-serving biases (hopefully that's an acceptable paraphrase of some complex ideas!), I've been surprised to see your many posts in the forum right now that seem to confidently assert the outcome was both unforeseeable and unrelated to rationalist ideas (thereby casting EAs, including yourself, purely as victims rather than potentially also causal agents here).
To me, there seems to be a quite plausible path from ideas about the extreme urgency of AI alignment research and the importance of exercising "extreme" personal agency (relative to existing social norms) to a group of people taking on extreme risks, with great urgency and high personal agency, in order to raise funds for AI alignment research.
I have no connection to any of the people involved and no way to know whether this is what happened in this case; I'm just saying that it seems like a plausible path to what happened here, given the publicly available information, and I'm curious whether that's something you've considered.