William_MacAskill

I'm obviously sad that you're moving on, but I trust your judgment that it's the right decision. I've deeply appreciated your hard work on GWWC over these last years - it's both a hugely impactful project from an impartial point of view and, from my own partial point of view, one that I care very strongly about. I think you're a hard-working, morally motivated and high-integrity person and it's always been very reassuring to me to have you at the helm. Under your leadership you transformed the organisation. So: thank you!

I really hope your next step helps you flourish and continues to give you opportunities to make the world better. 

Like Buck and Toby, I think this is a great piece of legislation and think that it's well worth the time to send a letter to Governor Newsom. I'd love to see the community rallying together and helping to make this bill a reality!
 

On talking about this publicly

A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.” 

Shortly after the collapse, I drafted a blog post and responses to comments on the Forum. I was also getting a lot of media requests, and I was somewhat sympathetic to the idea of doing podcasts about the collapse — defending EA in the face of the criticism it was getting. My personal legal advice was very opposed to speaking publicly, for reasons I didn’t wholly understand; the reasons were based on a general principle rather than anything to do with me, as they’ve seen a lot of people talk publicly about ongoing cases and it’s gone badly for them, in a variety of ways. (As I’ve learned more, I’ve come to see that this view has a lot of merit to it). I can’t remember EV’s view, though in general it was extremely cautious about communication at that time. I also got mixed comments on whether my Forum posts were even helpful; I haven’t re-read them recently, but I was in a pretty bad headspace at the time. Advisors said that by January things would be clearer. That didn’t seem like that long to wait, and I felt very aware of how little I knew.

The “time at which it’s ok to speak”, according to my advisors, kept getting pushed back. But by March I felt comfortable, personally, about speaking publicly. I had a blog post ready to go, but by this point the Mintz investigation (that is, the investigation that EV had commissioned) had gotten going. Mintz were very opposed to me speaking publicly. I think they said something to the effect that my draft was right on the line: they’d consider resigning from running the investigation if I posted it. They thought the integrity of the investigation would be compromised if I posted, because my public statements might have tainted other witnesses in the investigation, or had a bearing on what they said to the investigators. EV generally wanted to follow Mintz’s view on this, but couldn’t share legal advice with me, so it was hard for me to develop my own sense of the costs and benefits of communicating.

By December, the Mintz report was fully finished and the bankruptcy settlement was completed. I was travelling (vacation and work) over December and January, and aimed to record podcasts on FTX in February. That got delayed by a month because of Sam Harris’s schedule, so they got recorded in March. 

It’s still the case that talking about this feels like walking through a minefield. There’s still a real risk of provoking unjustified and unfair lawsuits against me or other people or organisations, which, even if frivolous, can impose major financial costs and lasting reputational damage. Other relevant people also don’t want to talk about the topic, even if just for their own sanity, and I don’t want to force their hand. In my own case, thinking and talking about this topic feels like prodding an open wound, so I’m sympathetic to their decision.

Elon Musk

Stuart Buck asks:

“[W]hy was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk's purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?”

Sam was interested in investing in Twitter because he thought it would be a good investment; it would be a way of making more money for him to give away, rather than a way of “spending” money. Even prior to Musk being interested in acquiring Twitter, Sam mentioned he thought that Twitter was under-monetised; my impression was that that view was pretty widely-held in the tech world. Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.

I didn’t have strong views on whether either of these opinions were true. My aim was just to introduce the two of them, and let them have a conversation and take it from there.

On “ingratiating”: Musk has pledged to give away at least half his wealth; given his net worth in 2022, that would amount to over $100B. There was a period of time when it looked like he was going to get serious about that commitment, and ramp up his giving significantly. Whether that money was donated well or poorly would be of enormous importance to the world, and that’s why I was in touch with him. 

How I publicly talked about Sam 

Some people have asked questions about how I publicly talked about Sam, on podcasts and elsewhere. Here is a list of all the occasions I could find where I publicly talked about him.  Though I had my issues with him, especially his overconfidence, overall I was excited by him. I thought he was set to do a tremendous amount of good for the world, and at the time I felt happy to convey that thought. Of course, knowing what I know now, I hate how badly I misjudged him, and hate that I at all helped improve his reputation.

Some people have claimed that I deliberately misrepresented Sam’s lifestyle. In a number of places, I said that Sam planned to give away 99% of his wealth, and in this post, in the context of discussing why I think honest signalling is good, I said, “I think the fact that Sam Bankman-Fried is a vegan and drives a Corolla is awesome, and totally the right call”. These statements represented what I believed at the time. Sam said, on multiple occasions, that he was planning to give away around 99% of his wealth, and the overall picture I had of him was highly consistent with that, so the Corolla seemed like an honest signal of his giving plans.

It’s true that the apartment complex where FTX employees, including Sam, lived, and which I visited, was extremely high-end. But, generally, Sam seemed uninterested in luxury or indulgence, especially for someone worth $20 billion at the time. From what I saw, he usually cooked dinner for himself. He was still a vegan, and I never saw him consume a non-vegan product. He dressed shabbily. He never expressed interest in luxuries. As far as I could gather, he never took a vacation, and rarely even took a full weekend off. In his time off he would play chess or video games, or occasionally padel. I never saw him drink alcohol or do illegal drugs.

The only purchase that I knew of that seemed equivocal was the penthouse. But that was shared with 9 other flatmates, with the living room doubling as an office space, and was used to host company dinners. I did ask Nishad why they were living in such luxury accommodation: he said that it was nicer than they’d ideally like, but that they were supply-constrained in the Bahamas. They wanted somewhere that would be attractive enough to make employees move from the US, that would have good security, and that would have a campus feel; Albany was pretty much their only option. This seemed credible to me at the time, especially given how strange and cramped their offices were. And even if it was a pure indulgence, the cost to Sam of 1/10th of a $30M penthouse was ~0.01% of his wealth — so, compatible with giving away 99% of what he made.

After the collapse happened, though, I re-listened to Sam’s appearance on the 80,000 Hours podcast, where he commented that he likes nice apartments, which suggests that there was more self-interest at play than Nishad had made out. And, of course, I don’t know what I didn’t see; I was deceived about many things, so perhaps Sam and others lied about their personal spending, too.

What I heard from former Alameda people 

A number of people have asked about what I heard and thought about the split at early Alameda. I talk about this on the Spencer podcast, but here’s a summary. I’ll emphasise that this is me speaking about my own experience; I’m not speaking for others.

In early 2018 there was a management dispute at Alameda Research. The company had started to lose money, and a number of people were unhappy with how Sam was running the company. They told Sam they wanted to buy him out and that they’d leave if he didn’t accept their offer; he refused and they left. 

I wasn’t involved in the dispute; I heard about it only afterwards. There were claims being made on both sides and I didn’t have a view about who was more in the right, though I was more in touch with people who had left or reduced their investment. That included the investor who was most closely involved in the dispute, who I regarded as the most reliable source.

It’s true that a number of people, at the time, were very unhappy with Sam, and I spoke to them about that. They described him as reckless, uninterested in management, bad at managing conflict, and unwilling to accept a lower return, instead wanting to double down. In hindsight, this was absolutely a foreshadowing of what was to come. At the time, I believed the view, held by those who left, that Alameda had been a folly project that was going to fail.[1]

As of late 2021, the early Alameda split made me aware that Sam might be difficult to work with. But there are a number of reasons why it didn’t make me think I shouldn’t advise his foundation, or that he might be engaging in fraud. 

The main investor who was involved in the 2018 dispute and negotiations — and who I regarded as largely “on the side” of those who left (though since the collapse they’ve emphasised to me they didn’t regard themselves as “taking sides”) — continued to invest in Alameda, though at a lower amount, after the dispute. This made me think that what was at issue, in the dispute, was whether the company was being well-run and would be profitable, not whether Sam was someone one shouldn’t work with.

The view of those who left was that Alameda was going to fail. When, instead, it and FTX were enormously successful, and had received funding from leading investors like BlackRock and Sequoia, this suggested that those earlier views had been mistaken, or that Sam had learned lessons and matured over the intervening years. I thought this view was held by a number of people who’d left Alameda; since the collapse I checked with several of those who left, who have confirmed that was their view.[2]

This picture was supported by the actions of people who’d previously worked at Alameda. Over the course of 2022, former Alameda employees, investors, or advisors who had grievances against Sam did things like: advise Future Fund, work as a Future Fund regranter, accept a grant from Future Fund, congratulate Nick on his new position, trade on FTX, or even hold a significant fraction of their net worth on FTX. People who left early Alameda, including very core people, were asked for advice by people who had offers to work at FTX Foundation; as far as I know, none of them advised against working for Sam.

I was also in contact with a few former Alameda people over 2022: as far as I remember, none of them raised concerns to me. And shortly after the collapse, one of the most central people who left early Alameda, probably the one with the most animosity towards Sam, messaged me to say that they were as surprised as anyone, that they thought it was reasonable to regard the early Alameda split as a typical cofounder fallout, and that even they had come to think that Alameda and FTX had overcome their early issues, and so had started to trade on FTX.[3][4]

I wish I’d been able to clear this up as soon as the TIME article was released, and I’m sorry that this means there’s been such a long period of people having question marks about this. The failure was that, at the time, I thought I’d be able to talk publicly about this just a few weeks later, but that moment kept getting delayed.

  1. ^

    Sam was on the board of CEA US at the time (early 2018). Around that time, after the dispute, I asked the investor that I was in touch with whether Sam should be removed from the board, and the investor said there was no need. A CEA employee (who wasn't connected to Alameda) brought up the idea that Sam should transition off the board, because he didn't help improve diversity of the board, didn't provide unique skills or experience, and that CEA now employed former Alameda employees who were unhappy with him. Over the course of the year that followed, Sam was also becoming busier and less available. In mid-2019, we decided to start to reform the board, and Sam agreed to step down.

  2. ^

    In addition, one former Alameda employee, who I was not particularly in touch with, made the following comment in March 2023. It was a comment on a private googledoc (written by someone other than me), but they gave me permission to share:

    "If you’d asked me about Sam six months ago I probably would have said something like “He plays hardball and is kind of miserable to work under if you want to be treated as an equal, but not obviously more so than other successful business people.” (Think Elon Musk, etc.)

    "Personally, I’m not willing to be an asshole in order to be successful, but he’s the one with the billions and he comprehensively won on our biggest concrete disagreements so shrug. Maybe he reformed, or maybe this is how you have to be.

    "As far as I was concerned that impression was mostly relevant to people considering working with or for Sam directly, and I shared it pretty freely when that came up.

    "Saying anything more negative still feels like it would have been a tremendous failure to update after reality turned out not at all like I thought it would when I left Alameda in 2018 (I thought Alameda would blow up and that FTX was a bad idea which played to none of our strengths).

    "Basically I think this and other sections [of the googledoc] are acting like people had current knowledge of bad behaviour which they feared sharing, as opposed to historical knowledge of bad behaviour which tended to be accompanied by doomy predictions that seemed to have been comprehensively proven false. Certainly I had just conceded epistemic defeat on this issue."

  3. ^

    They also thought, though, that the FTX collapse should warrant serious reflection about the culture in EA.

  4. ^

    On an older draft of this comment (which was substantively similar) I asked several people who left Alameda in 2018 (or reduced their investment) to check the above six paragraphs, and they told me they thought the paragraphs were accurate.

Lessons and updates

The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s entry point into EA.

In these comments, I offer my thoughts, but I don’t claim to be the expert on the lessons we should take from this disaster. Sam and the others harmed me and people and projects I love, more than anyone else has done in my life. I was lied to, extensively, by people I thought were my friends and allies, in a way I’ve found hard to come to terms with. Even though a year and a half has passed, it’s still emotionally raw for me: I’m trying to be objective and dispassionate, but I’m aware that this might hinder me.

There are four categories of lessons and updates:

  • Undoing updates made because of FTX
  • Appreciating the new world we’re in 
  • Assessing what changes we could make in EA to make catastrophes like this less likely to happen again
  • Assessing what changes we could make such that EA could handle crises better in the future

On the first two points, the post from Ben Todd is good, though I don’t agree with all of what he says. In my view, the most important lessons when it comes to the first two points, which also have bearing on the third and fourth, are:

  • Against “EA exceptionalism”: without evidence to the contrary, we should assume that people in EA are about average (given their demographics) on traits that don’t relate to EA. Sadly, that includes things like likelihood to commit crimes. We should be especially cautious to avoid a halo effect — assuming that because someone is good in some ways, like being dedicated to helping others, then they are good in other ways, too, like having integrity.  
    • Looking back, there was a crazy halo effect around Sam, and I’m sure that will have influenced how I saw him. Before advising Future Fund, I remember asking a successful crypto investor — not connected to EA — what they thought of him. Their reply was: “He is a god.”
    • In my own case, I think I’ve been too trusting of people, and in general too unwilling to countenance the idea that someone might be a bad actor, or be deceiving me. Given what we know now, it was obviously a mistake to trust Sam and the others, but I think I've been too trusting in other instances in my life, too. I think in particular that I’ve been too quick to assume that, because someone indicates they’re part of the EA team, they are thereby trustworthy and honest. I think that fully improving on this trait will take a long time for me, and I’m going to bear this in mind in which roles I take on in the future. 
  • Presenting EA in the context of the whole of morality. 
    • EA is compatible with very many different moral worldviews, and this ecumenism was a core reason why EA was defined as it was. But people have often conflated EA with naive utilitarianism: the view that promoting wellbeing is the *only* thing that matters.
    • Even on pure utilitarian grounds, you should take seriously the wisdom enshrined in common-sense moral norms, and be extremely sceptical if your reasoning leads you to depart wildly from them. There are very strong consequentialist reasons for acting with integrity and for being cooperative with people with other moral views.
    • But, what’s more, utilitarianism is just one plausible moral view among many, and we shouldn’t be at all confident in it. Taking moral uncertainty into account means taking seriously the consequences of your actions, but it also means respecting common-sense moral prohibitions.[1] 
    • I could have done better in how I’ve communicated on this score. In the past, I’ve emphasised the distinctive aspects of EA, treated the conflation with naive utilitarianism as a confusion that people have, and the response to it as an afterthought, rather than something built into the core of talking about the ideas. I plan to change that, going forward — emphasising more the whole of morality, rather than just the most distinctive contributions that EA makes (namely, that we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests).
  • Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before. 
    • Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive?  Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.
  • Being willing to fight for EA qua EA.
    • FTX has given people an enormous stick to hit EA with, and means that a lot of people have wanted to disassociate from EA. This will result in less work going towards the most important problems in the world today - yet another of the harms that Sam and the others caused. 
    • But it means we’ll need, more than ever, for people who believe that the ideas are true and important to be willing to stick up for them, even in the face of criticism that’s often unfair and uncharitable, and sometimes downright mean. 

On the third point — how to reduce the chance of future catastrophes — the key thing, in my view, is to pay attention to people’s local incentives when trying to predict their behaviour, in particular looking at the governance regime they are in. Some of my concrete lessons, here, are:

  • You can’t trust VCs or the financial media to detect fraud.[2] (Indeed, you shouldn’t even expect VCs to be particularly good at detecting fraud, as it’s often not in their self-interest to do so; I found Jeff Kaufman’s post on this very helpful).
  • The base rates of fraud are surprisingly high (here and here).
  • We should expect the base rate to be higher in poorly-regulated industries.
  • The idea that a company is run by “good people” isn't sufficient to counterbalance that. 
    • In general, people who commit white collar crimes often have good reputations before the crime; this is one of the main lessons from Eugene Soltes’s book Why They Do It
    • In the case of FTX: the fraud was committed by Caroline, Gary and Nishad, as well as Sam. Though some people had misgivings about Sam, I haven’t heard the same about the others. In Nishad’s case in particular, comments I’ve heard about his character are universally that he seemed kind, thoughtful and honest. Yet, that wasn’t enough.
    • (This is all particularly on my mind when thinking about the future behaviour of AI companies, though recent events also show how hard it is to get governance right so that it’s genuinely a check on power.)
  • In the case of FTX, if there had been better aggregation of people’s opinions on Sam that might have helped a bit, though as I note in another comment there was a widespread error in thinking that the 2018 misgivings were wrong or that he’d matured. But what would have helped a lot more, in my view, was knowing how poorly-governed the company was — there wasn’t a functional board, or a risk department, or a CFO.

On how to respond better to crises in the future: I think there’s a lot to say. I currently have no formal responsibilities over any community organisations, and do limited informal advising, too,[3] so I’ll primarily let Zach (once he’s back from vacation) or others comment in more depth on lessons learned from this, as well as changes that are being made, and planned to be made, across the EA community as a whole.

But one of the biggest lessons, for me, is decentralisation, and ensuring that people and organisations to a greater extent have clear separation in their roles and activities than they have had in the past. I wrote about this more here. (Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism. This is because there are gains from “public goods” in the movement that won't be provided by default, and because I think Zach is going to be a strong CEO who can plausibly pull it off.)

In my own case, at the point of time of the FTX collapse, I was:

  • On the board of EV
  • An advisor to Future Fund
  • The most well-known advocate of EA

But once FTX collapsed, these roles interfered with each other. In particular, being on the board of EV and an advisor to Future Fund majorly impacted my ability to defend EA in the aftermath of the collapse and to help the movement try to make sense of what had happened. In retrospect, I wish I’d started building up a larger board for EV (then CEA), and transitioned out of that role, as early as 2017 or 2018; this would have made the movement as a whole more robust.

Looking forward, I’m going to stay off boards for a while, and focus on research, writing and advocacy.

  1. ^

    I give my high-level take on what generally follows from taking moral uncertainty seriously, here: “In general, and very roughly speaking, I believe that maximizing expected choiceworthiness under moral uncertainty entails something similar to a value-pluralist consequentialism-plus-side-constraints view, with heavy emphasis on consequences that impact the long-run future of the human race.”

  2. ^

    There’s a knock against prediction markets, here, too. A Metaculus forecast, in March of 2022 (the end of the period when one could make forecasts on this question), gave a 1.3% chance of FTX making any default on customer funds over the year. The probability that the Metaculus forecasters would have put on the claim that FTX would default on very large numbers of customer funds, as a result of misconduct, would presumably have been lower.

  3. ^

    More generally, I’m trying to emphasise that I am not the “leader” of the EA movement, and, indeed, that I don’t think that the EA movement is the sort of thing that should have a leader. I’m still in favour of EA having advocates (and, hopefully, very many advocates, including people who hopefully get a lot more well-known than I am), and I plan to continue to advocate for EA, but I see that as a very different role. 

Hi Yarrow (and others on this thread) - this topic comes up on the Clearer Thinking podcast, which comes out tomorrow. As Emma Richter mentions, the Clearer Thinking podcast is aimed more at people in or related to EA, whereas Sam Harris's wasn't; it was up to him what topics he wanted to focus on. 

Thanks! Didn't know you're sceptical of AI x-risk. I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally. 

Thanks so much for those links, I hadn't seen them! 

(So much AI-related stuff coming out every day, it's so hard to keep on top of everything!)
