The FTX situation is raising a lot of good questions. Could this have been prevented? What warning signs were there, and did people act on them as much as they should have? What steps could be taken to lower the odds of a similar situation in the future?

I want to think hard about questions like these, and I want to have a good (and public) discussion about them. But I don’t want to rush that discussion. (I will continue to communicate things that seem directly and time-sensitively action-relevant; what I don’t want to rush is reflection on what went wrong and what we can learn.)

The overarching reason for this is that I think discussion will be better - more thoughtful, more honest, more productive - to the extent that it happens after the dust has settled a bit. (I’m guessing this will be some number of weeks or months, as opposed to days or years.)

I’m hearing calls from various members of the community to discuss all these issues quickly, and concerns (from people who’d rather not move so quickly) that engaging too slowly could risk losing the trust of the community. As a sort of compromise, I’m rushing out this post on why I disprefer rushing.[1] My guess is that a number of other people have similar thoughts to mine on this point, but I’ll speak only for myself.

I expect some people to read this as implicitly suggesting that others behave as I do. That’s mostly not right, so I’ll be explicit about my goals. My primary goal is just to explain my own behavior. My secondary goal is to make it easier to understand why some others might be behaving as I am. My tertiary goal is to put some considerations out there that might change some other people’s minds somewhat about what they want to do; but I don’t expect or want everyone to make the same calls I’m making (actually it would be very weird if the EA Forum were quiet right now; that’s not something I wish for).

So, reasons why I mostly expect to stick to cold takes (weeks or months from now) rather than hot takes (days):

I think cold takes will be more intelligent and thoughtful. In general, I find that I have better thoughts on anything after I have a while to process it. In the immediate aftermath of new information, I have tons of quick reactions that tend not to hold up well; they’re often emotion-driven, often overcorrections to what I thought before and overreactions to what others are saying, etc.

Waiting also tends to mean I get to take in a lot more information, and angles from other people, that can affect my thinking. (This is especially the case with other people being so into hot takes!)

It also tends to give me more space for minimal-trust thinking. If I want to form the most accurate possible belief in the heat of the moment, I tend to look to people who have thought more about the matter than I have, and think about which of them I want to bet on and defer to. But if I have more time, I can develop my own models and come to the point where I can personally stand behind my opinions. (In general I’ve been slower than some to adopt ideas like the most important century hypothesis, but I also think I have more detailed understanding and more gut-level seriousness about such ideas than I would’ve if I’d adopted them more quickly and switched from “explore” to “exploit” mode earlier.)

These factors seem especially important for topics like “What went wrong here and what can we learn for the future?” It’s easy to learn the wrong lessons from a new development, and I think the extra info and thought is likely to really pay off.

I think cold takes will pose less risk of doing harm. Right now there is a lot of interest in the FTX situation, and anything I say could get an inordinate number of readers. Some of those readers have zero interest in truth-seeking, and are instead (a) looking to write a maximally juicy story, with truth only as an instrumental goal toward that (at best); or (b) in some cases, actively looking to try to harm effective altruism by putting negative narratives out there that could stick around even if they’re debunked.

If I wait longer, more of those people will have moved on (particularly the ones in category (a), since this won’t be such a hot news topic anymore). And I’ll have more time to consider the downsides of my comments and find ways to say what’s important while reducing those downsides.

Maybe I should not care about this? Maybe it’s somehow fundamentally wrong to even consider the “PR impact” of things I say? I’ve heard this sentiment at times, but I don’t really understand it.

  • I think that overweighting “PR considerations” can be bad from an integrity perspective (people who care too much about PR can be slippery and deceptive) and can often backfire (I think being less than honest is nearly always a bad PR move).
  • But that doesn’t mean these considerations should be given zero weight.
  • I occasionally hear arguments like “If person X bounces off of [important cause Y] because of some media narrative, this just shows that they were a superficial person whom [important cause Y] didn’t need anyway.” I may be missing something, but I basically don’t get this point of view at all: there are lots of people who can be helpful with important causes who don’t have the time or dedication to figure everything out for themselves, and for whom media narratives and first impressions matter. (I definitely think this applies to the causes I’m most focused on.)
  • And even if I should put zero weight on these types of considerations, I think this is just unrealistic, akin to trying to work every waking hour or be 100% altruistic or be 100% open with everyone at all times. I do care about bad press, I’m not going to make myself not care, and it seems better to deal with that as a factor in my life than try to white-knuckle myself into ignoring it. If I’m slower and more careful to write up my thoughts, I face less of a tradeoff between truth-seeking and PR considerations. That brings me to the next point.

I think cold takes will be more open and honest. If I’m rushing my writing or trying to avoid harm from bad-faith readers, these are forces pushing away from stating things as I really see them. The same applies to just being unconsciously influenced by the knowledge that what I write will have a particularly large and hard-to-model audience.

To be clear, I try to make all of my public writing open and honest, but this is a matter of effort - not just intention - and I expect to do it better if I have longer. Taking more time means I face fewer distortive forces, and it gives me more chances to reflect and think: “Is this really my take? Do I really stand behind it?”

I’m especially busy right now. There is an awful lot of chaos and a lot of urgent priorities, including setting policies on funding for people affected by the situation, thinking about what our new funding standards should be generally, and deciding which public statements are urgent enough to make. (I note that a lot of other people are similarly dealing with hugely increased workloads.) Reflecting on lessons learned is very important in the long run, and I expect to get to it over the coming months, but it’s not currently the most time-sensitive priority.

Bottom line. I’ll continue to put out public comments when I think there’s an especially important, time-sensitive benefit to be had. And I do expect to put out my reflections on this matter (or to endorse someone else’s if they capture enough of mine) sometime in the next few months. But my guess is that my next major public piece will be about AI risk, not FTX. I’ve been working on some AI risk content for a long time.

Notes


  1. Though I do worry that when the smoke has cleared, I’ll look back and think, “Gosh, that message was all wrong - it’s much better to rush out hot takes than to take my time and focus on cold ones. I really regret giving a case against rashness so rashly.” 

Comments (21)

Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:

  1. Inspired/funded by EA
  2. Taking big risks with other people's lives/money
  3. Attempt at regulatory capture
  4. Large employee exodus due to safety/ethics/governance concerns
  5. Lack of public details of concerns due in part to non-disparagement agreements

3. Attempt at regulatory capture

I followed this link, but I don't understand what it has to do with regulatory capture. The linked thread seems to be about nepotistic hiring and conflicts of interest at/around OpenAI.

OpenPhil recommended a $30M grant to OpenAI in a deal that involved the OP (then-CEO of OpenPhil) becoming a board member of OpenAI. This occurred no later than March 2017. Later, OpenAI appointed both the OP's then-fiancée and the fiancée’s sibling to VP positions. See these two LinkedIn profiles and the "Relationship disclosures" section in this OpenPhil writeup.

It seems plausible that there was a causal link between the $30M grant and the appointment of the fiancée and her sibling to VP positions. OpenAI may have made these appointments while hoping to influence the OP's behavior in his capacity as a board member of OpenAI who was seeking to influence safety and governance matters, as indicated in the following excerpt from OpenPhil's writeup:

[...] the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.

Less importantly, see 30 seconds from this John Oliver monologue as evidence that companies sometimes suspiciously employ family members of regulators.

Thanks for explaining, but who are you considering to be the "regulator" who is "captured" in this story? I guess you are thinking of either OpenPhil or OpenAI's board as the "regulator" of OpenAI. I've always heard the term "regulatory capture" in the context of companies capturing government regulators, but I guess it makes sense that it could be applied to other kinds of overseers of a company, such as its board or funder.

who are you considering to be the "regulator" who is "captured" in this story?

In the regulatory capture framing, the person who had a role equivalent to a regulator was the OP who joined OpenAI's Board of Directors as part of an OpenPhil intervention to mitigate x-risks from AI. (OpenPhil publicly stated their motivation to "help play a role in OpenAI's approach to safety and governance issues" in their writeup on the $30M grant to OpenAI).

An important difference is that OpenAI has been distancing itself from EA after the Anthropic split.

Honestly, I’m happy with this compromise. I want to hear more about what ‘leadership’ is thinking, but I also understand the constraints you all have.

This obviously doesn’t answer the questions people have, but at least communicating this instead of radio silence is very much appreciated. For me at least, it feels like it helps reduce feelings of disconnectedness and makes the situation a little less frustrating.

Strongly agree here. Simply engaging with the community seems far better than silence. I think the object-level details of FTX are less important than making the community not feel like it has been thrown to the wolves.

I remember the first 24 hours: I was seriously spooked by the quiet. I had no idea that there were going to be hostile lawyers and journalists swarming all over the place, combing the crisis for slips of the tongue to take out of context. Politicians might even join in after the dust settles from the election and the status of the deposits becomes clear.

EA "leadership" was not optimized to handle this sort of thing, whereas conventional charities optimize for that risk by default, e.g. by dumping all their bednets in a random village to cut costs, so that if people look, they can honestly say they minimized overhead and maximized bednets per dollar.

Thank you for writing this - strong +1. At 80k we are going to be thinking carefully about what this means for our career advice and our ways of communicating - how this should change things and what we should do going forward. But there’s a decent amount we still don’t know and it will also just take time to figure that all out.

It feels like we've just gotten a load of new information, with probably more coming, and I am in favour of updating on things carefully.

Holden - thanks very much for writing this; I strongly agree with the importance of patience, fact-gathering, wisdom, and cold takes. 

During a PR crisis, often the best communication strategy is not to communicate, and to let the media attention die down and move on to the next monetizable outrage narrative about somebody else or some other group that's allegedly done something wrong.

I would add just three supporting points.

First, hot takes tend to provoke hot counter-takes, leading to cycles of accusations and counter-accusations.  When a movement undergoes a moral crisis, and seems guilt-stricken, self-critical, and full of self-doubt, old grievances suddenly get aired, in hopes that the movement's members will be more vulnerable to various forms of moral blackmail, and will change their policies, norms, and ethos under conditions of high emotionality and time pressure. The hot takes and hot counter-takes can also escalate into clannish fractures and ideological schisms in the movement. In other words, any individual hot take might seem innocuous, but collectively, a barrage of hot takes flying in all directions can have huge negative side-effects on a movement's social cohesiveness and moral integrity, and can lead to changes that seem urgent and righteous in the short-term, but that have big hidden costs in the long term.

Second, any hot takes that are shared on EA Forum are in the public domain, and can be quoted by any journalist, pundit, muckraker, blogger, YouTuber, or grievance-holder, for any reason, to push any narrative they want. We are used to EA Forum seeming like a cozy, friendly, in-group medium for open and honest discussions. But in the present circumstances, we may need to treat EA Forum as a de facto EA public relations outlet in its own right. Everything we say on here can be taken, quoted out of context, misrepresented, and spun, by anybody out there who's hostile to EA. Thus, when writing our hot takes here, we might naively imagine the audience being the average EA reader -- rational, kind, constructive, sympathetic. But there's the tail risk that any given hot take will be weaponized by non-EAs to hurt EA in any way they can.

Third, some EA people seem to misunderstand the nature of PR issues, media narratives, and the 'brand equity' of social/activist/moral movements like EA. Certainly, as Holden notes, 'people who care too much about PR can be slippery and deceptive'. But many people outside the professions of PR, crisis management, market research, advertising, political campaigning, etc. tend to view 'public relations' as very nebulous, vague, and unreal -- the realm of sociopathic mind-control wizards.

However, public sentiment can be measured, quantified, analyzed, and influenced. Literally tens of thousands of people in market research do this all day, every day, for corporations, governments, activist movements, etc. There are facts of the matter about public perception of EA as a moral/social brand. Some actual number of people have heard about EA for the first time in the last few days -- maybe tens of millions. Some specific percentage of them will have formed a negative, neutral, or positive impression of EA. Any negative impressions of EA will last an average of X days, weeks, or years. They will be Y% less (or more) likely to get involved in EA, or to donate money to EA. We don't know what those numbers actually are (though we should probably spend a bit of money on market research to find out how bad the damage has actually been).

There's a psychological reality to public sentiment -- however tricky it can be to measure, and however transient its effects can be. Most of us are amateurs when it comes to thinking about PR. But it's better to recognize that we're newbies with a lot to learn -- rather than dismissing PR concerns as beneath contempt.

Meta note: You've had a lot of sober and interesting things to say on the EA Forum, Geoffrey, and I've been appreciating having you around for these conversations. :)

(It sounds like I'm more pro-hot-takes and less PR-concerned than you and Holden, and I may write more about that in the future, but I'll ironically need to think about it longer in order to properly articulate my views.)

Rob -- I appreciate your comment; thank you! 

Look forward to whatever you have to say, in due course.

I hope I'm not tempting fate here, but I'm quite surprised I haven't already seen the EA Forum quoted "out there." I can only imagine outsiders have juicier things to focus on than this forum for the moment. I suppose once they tire of FTX/Alameda leaders' blogs and other sources, they might wander over here for some dirt.

A few days ago, someone noted a couple of instances and someone else has just noted another.

I'm commenting here to say that while I don't plan to participate in public discussion of the FTX situation imminently (for similar reasons to the ones Holden gives above, though I don't totally agree with some of Holden's explanations here, and personally put more weight on some considerations here than others), I am planning to do so within the next several months. I'm sorry for how frustrating that is, though I endorse my choice.

Frankly, I'm pretty disturbed by how fast things are going and how quick people were to demand public hearings. Over the last 20 years, this sort of thing has happened in extremely bad situations, and in a surprisingly large proportion of them, the calls for upheaval were deliberately and repeatedly sparked by a disproportionately well-resourced vocal minority.

Can you clarify which "public hearings" were demanded? Not sure if you're talking about how quickly the bankruptcy process has been moving at FTX, or about how people on the EA Forum have reacted since the news about FTX broke.

Thanks for sharing your thoughts on this; your points on PR especially updated me a bit towards taking PR more seriously.

One piece of pushback on your overall message is that I think there are other kinds of communication besides cold and hot takes (which I understand as more or less refined assessments of the situation and its implications). One can:

  • share what one is currently doing about the situation,
  • share details that help others figure out what can be learned from this,
    • (this sometimes might require some bravery and potentially making oneself susceptible to legal risks, but I'd guess that for many who can share useful info it wouldn't?)
  • share your feelings about the crisis.

I'm overall feeling like contributing to the broader truth-seeking process is a generally cooperative and laudable move, and that it's often relatively easy to share robust assessments that one is unlikely to have to backtrack, such as those I've seen from MacAskill, Wiblin, and Sam Harris.

For example, I really appreciated Oliver Habryka reflecting publicly on his potential role in this situation through not having sufficiently communicated his impression of SBF. I expect that Habryka giving this "take" and the associated background info will not prove wrong-headed in 3 months, and it didn't seem driven by hot emotions or an overreaction to me.

I'm waiting to read your take, particularly since the conflict-of-interest issue has come up with FTX and you seem to have some conflicts of interest of your own, especially when it comes to AI safety funding. I'm curious how you have managed to stay unbiased.