
The FTX situation is raising a lot of good questions. Could this have been prevented? What warning signs were there, and did people act on them as much as they should have? What steps could be taken to lower the odds of a similar situation in the future?

I want to think hard about questions like these, and I want to have a good (and public) discussion about them. But I don’t want to rush to make sure that happens as fast as possible. (I will continue to communicate things that seem directly and time-sensitively action-relevant; what I don’t want to rush is reflection on what went wrong and what we can learn.)

The overarching reason for this is that I think discussion will be better - more thoughtful, more honest, more productive - to the extent that it happens after the dust has settled a bit. (I’m guessing this will be some number of weeks or months, as opposed to days or years.)

I’m hearing calls from various members of the community to discuss all these issues quickly, and concerns (from people who’d rather not move so quickly) that engaging too slowly could risk losing the trust of the community. As a sort of compromise, I’m rushing out this post on why I disprefer rushing.[1] My guess is that a number of other people have similar thoughts to mine on this point, but I’ll speak only for myself.

I expect some people to read this as implicitly suggesting that others behave as I do. That’s mostly not right, so I’ll be explicit about my goals. My primary goal is just to explain my own behavior. My secondary goal is to make it easier to understand why some others might be behaving as I am. My third goal is to put some considerations out there that might change some other people’s minds somewhat about what they want to do; but I don’t expect or want everyone to make the same calls I’m making (actually it would be very weird if the EA Forum were quiet right now; that’s not something I wish for).

So, reasons why I mostly expect to stick to cold takes (weeks or months from now) rather than hot takes (days):

I think cold takes will be more intelligent and thoughtful. In general, I find that I have better thoughts on anything after I have a while to process it. In the immediate aftermath of new information, I have tons of quick reactions that tend not to hold up well; they’re often emotion-driven, often overcorrections to what I thought before and overreactions to what others are saying, etc.

Waiting also tends to mean I get to take in a lot more information, and angles from other people, that can affect my thinking. (This is especially the case with other people being so into hot takes!)

It also tends to give me more space for minimal-trust thinking. If I want to form the most accurate possible belief in the heat of the moment, I tend to look to people who have thought more about the matter than I have, and think about which of them I want to bet on and defer to. But if I have more time, I can develop my own models and come to the point where I can personally stand behind my opinions. (In general I’ve been slower than some to adopt ideas like the most important century hypothesis, but I also think I have more detailed understanding and more gut-level seriousness about such ideas than I would’ve if I’d adopted them more quickly and switched from “explore” to “exploit” mode earlier.)

These factors seem especially important for topics like “What went wrong here and what can we learn for the future?” It’s easy to learn the wrong lessons from a new development, and I think the extra info and thought is likely to really pay off.

I think cold takes will pose less risk of doing harm. Right now there is a lot of interest in the FTX situation, and anything I say could get an inordinate number of readers. Some of those readers have zero interest in truth-seeking, and are instead (a) looking to write a maximally juicy story, with truth only as an instrumental goal toward that (at best); or (b) in some cases, actively looking to try to harm effective altruism by putting negative narratives out there that could stick around even if they’re debunked.

If I wait longer, more of those people will have moved on (particularly the ones in category (a), since this won’t be such a hot news topic anymore). And I’ll have more time to consider downsides of my comments and find ways to say what's important while reducing those downsides.

Maybe I should not care about this? Maybe it’s somehow fundamentally wrong to even consider the “PR impact” of things I say? I’ve heard this sentiment at times, but I don’t really understand it.

  • I think that overweighting “PR considerations” can be bad from an integrity perspective (people who care too much about PR can be slippery and deceptive) and can often backfire (I think being less than honest is nearly always a bad PR move).
  • But that doesn’t mean these considerations should be given zero weight.
  • I occasionally hear arguments like “If person X bounces off of [important cause Y] because of some media narrative, this just shows that they were a superficial person whom [important cause Y] didn’t need anyway.” I may be missing something, but I basically don’t get this point of view at all: there are lots of people who can be helpful with important causes who don’t have the time or dedication to figure everything out for themselves, and for whom media narratives and first impressions matter. (I definitely think this applies to the causes I’m most focused on.)
  • And even if I should put zero weight on these types of considerations, I think this is just unrealistic, akin to trying to work every waking hour or be 100% altruistic or be 100% open with everyone at all times. I do care about bad press, I’m not going to make myself not care, and it seems better to deal with that as a factor in my life than try to white-knuckle myself into ignoring it. If I’m slower and more careful to write up my thoughts, I face less of a tradeoff between truth-seeking and PR considerations. That brings me to the next point.

I think cold takes will be more open and honest. If I’m rushing my writing or trying to avoid harm from bad-faith readers, these are forces pushing away from stating things as I really see them. The same applies to just being unconsciously influenced by the knowledge that what I write will have a particularly large and hard-to-model audience.

To be clear, I try to make all of my public writing open and honest, but this is a matter of effort - not just intention - and I expect to do it better if I have longer. Taking more time means I face fewer distortive forces, and it gives me more chances to reflect and think: “Is this really my take? Do I really stand behind it?”

I’m especially busy right now. There is an awful lot of chaos right now, and a lot of urgent priorities, including setting policies on funding for people affected by the situation, thinking about what our new funding standards should be generally, and deciding what public statements are urgent enough to make. (I note that a lot of other people are similarly dealing with hugely increased workloads right now.) Reflecting on lessons learned is very important in the long run, and I expect to get to it over the coming months, but it’s not currently the most time-sensitive priority.

Bottom line. I’ll continue to put out public comments when I think there’s an especially important, time-sensitive benefit to be had. And I do expect to put out my reflections on this matter (or to endorse someone else’s if they capture enough of mine) sometime in the next few months. But my guess is that my next major public piece will be about AI risk, not FTX. I’ve been working on some AI risk content for a long time.

Notes


  1. Though I do worry that when the smoke has cleared, I’ll look back and think, “Gosh, that message was all wrong - it’s much better to rush out hot takes than to take my time and focus on cold ones. I really regret giving a case against rashness so rashly.” 

Comments

Here’s a followup with some reflections.

Note that I discuss some takeaways and potential lessons learned in this interview.

Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:

  • The most obvious thing that’s changed is a tighter funding situation, which I addressed here.
  • I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:
    • It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.)
    • I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilitarianism.” I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn’t (and I don’t want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better-explained and easier to find, and I’m thinking about how to make that happen (though I can’t promise to do so soon).
  • I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”
  • I'm not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I've raised my estimate of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won't be able to predict perfectly on this front).
  • Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions: quicker to trust our own intuitions and people who intuitively seemed to share our values. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative consequences of things like the Future Fund regranting program; some of it was probably a less rational response to peer pressure. I now feel the case for caution and deliberation in most actions is quite strong - partly because the substantive situation has changed (effective altruism is now enough in the spotlight, and controversial enough, that the costs of further problems seem higher than they did before).
    • On this front, I’ve updated a bit toward my previous self, and more so toward Alexander’s style, in terms of wanting to weigh both explicit risks and vague misgivings significantly before taking notable actions. That said, I think balance is needed and this is only a fairly moderate update, partly because I didn’t update enormously in the other direction before. I think I’m still overall more in favor of moving quickly than I was ~5 years ago, for a number of reasons. In any case I don’t expect there to be a dramatic visible change on this front in terms of Open Philanthropy’s grantmaking, though it might be investing more effort in improving functions like community health.
  • Having seen the EA brand under the spotlight, I now think it isn’t a great brand for wide public outreach. It throws together a lot of very different things (global health giving, global catastrophic risk reduction, longtermism) in a way that makes sense to me but seems highly confusing to many, and puts them all under a wrapper that seems self-righteous and, for lack of a better term, punchable? I still think of myself as an effective altruist and think we should continue to have an EA brand for attracting the sort of people (like myself) who want to put a lot of dedicated, intensive time into thinking about what issues they can work on to do the most good; but I’m not sure this is the brand that will or should attract most of the people who can be helpful on key causes. I think it’s probably good to focus more on building communities and professional networks around specific causes (e.g., AI risk, biorisk, animal welfare, global health) relative to building them around “EA.”
  • I think we should see “EA community building” as less valuable than before, if only because one of the biggest seeming success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well. It’s hard to make a clean quantitative statement about how this will change Open Philanthropy's actions, but it’s a factor in how we recently ranked grants. I think it'll be important to do quite a bit more thinking about this (and in particular, to gather more data along these lines) in the longer run.

Thanks for writing this up. I agree with most of these points. However, not with the last one:

I think we should see “EA community building” as less valuable than before, if only because one of the biggest seeming success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well.

If anything, I think the dangers and pitfalls of optimization you mention warrant different community building, not less. Specifically, I see two potential dangers to pulling resources out of community building:

  1. Funded community builders would possibly have even stronger incentives to prioritize community growth over sustainable planning, accountability infrastructure, and community health. To my knowledge, CEA's past funding policy incentivized community builders to goodhart on acquiring new talent and funds, at the cost of building sustainable network and structural capital, and at the cost of fostering constructive community norms and practices. As long as one avoided visibly damaging the EA brand or turning off the very most talented individuals, it was just financially unreasonable to pay much attention to these things.
    In other words, the financial incentives so far may have forced community builders into becoming the hard-core utilitarians you are concerned about. And accordingly, they were forced to be role models of hard-core utilitarianism for those they built community for. This may have contributed to EA orthodoxy pre-FTX collapse, where it seemed to me that hard-core utilitarianism was generally considered synonymous with value-alignedness/high status.
    I don't expect this problem to get better if the bar for getting/remaining funded as a community builder gets higher - unless the metrics change significantly.
  2. Access to informal networks would become even more crucial than it already is. If we take money out of community building, we apply optimization pressure away from welcomingness/having low entry barriers to the community. Even more of EA's onboarding and mentorship than is already the case will be tied to informal networks. Junior community members will experience even stronger pressure to try to get invited to the right parties, to impress the right people, and to become friends and lovers with those who have money and power.

Accordingly, I suspect that the actual answer here is more professionalization, and in a different direction. Specifically:

  • Turning EA community building from a career stepping stone into a long-term career, with proper training, financial security, and everything. (CEA already thought of this of course; I can't find the relevant post.)
  • Having more (and more professionalized) community health infrastructure in national and local groups. For example, point people whom community members actually know and can talk to in person.
    CEA's community health team is important, and for all I know, they are doing a fairly impressive job. But I think the bar for reaching out to community health people could be much lower than it currently is. For many community members, CEA's team are just strangers on the internet, and I suspect that all too many new community members (i.e. those most vulnerable to power abuse/harassment/peer pressure) haven't heard of them in the first place.
  • Creating stronger accountability structures in national and local groups, like a board of directors that oversees larger local groups' work without being directly involved in it. (For example, EA Munich recently switched to a board structure, and we are working on that in Berlin ourselves.)
    For this to happen, we would need more experienced and committed people in community building. While technically a board of directors can be staffed entirely by volunteers, withdrawing funding and prestige from EA community building will make it more difficult to get the necessary number of sufficiently experienced and committed people enrolled.

Thoughts, disagreement?

(Disclaimer on conflict of interest: I'm currently EA Berlin's Community Coordinator and fundraising to turn that into a paid role.)

I have not thought much about this and do not know how far this applies to others (might be worth running a survey), but I very much appreciate the EA community. This is because I am somewhat cause-agnostic but have a skillset that might be applied to different causes, so it is very valuable for me to have a community that ties all these different causes together: it makes it easier to find work that I might be a good fit for helping out with. In a scenario where EA did not exist, only separate causes (although I think Holden Karnofsky only meant to invest less in EA, not abandon the project altogether), I would need to keep up with perhaps 10 or more separate communities in order to come across relevant opportunities to help.

Thanks for the update! I agree with Nathan that this deserves its own post. 

Re your last point, I always saw SBF/FTX (when things were going well) as a success story relating to E2G/billionaire philanthropy/maximisation/hardcore utilitarianism/risk-taking/etc. I feel these are the factors that make SBF's case distinctive, and the connection to community building is more tenuous. 

This being the case, the whole thing has updated me away from those things, but it hasn't really updated my view on community building (other than that we should be doing things more in line with Ord's opening address at EAG Bay Area).

I'm surprised you see things differently and would be interested in hearing why that is :) 

Maybe I'm just biased because I'm a professional community builder!  

Thanks for writing this.

It feels off to me that this is a forum reply. It seems important enough that it should be a post, and then shown to people in accordance with that.

Hey Holden,

Thanks for these reflections!

Could you maybe elaborate on what you mean by a 'bad actor'? There's some part of me that feels nervous about this as a framing, at least without further specification -- like maybe the concept could be either applied too widely (e.g. to anyone who expresses sympathy with "hard-core utilitarianism", which I'd think wouldn't be right), or have a really strict definition (like only people with dark tetrad traits) in a way that leaves out people who might be likely to (or: have the capacity to?) take really harmful actions.

To give a rough idea, I basically mean anyone who is likely to harm those around them (using a common-sense idea of doing harm) and/or "pollute the commons" by having an outsized and non-consultative negative impact on community dynamics. It's debatable what the best warning signs are and how reliable they are.

Thoughts on “maximisation is perilous”:

(1) We could put more emphasis on the idea of “two-thirds utilitarianism”.

(2) I expect we could come up with a better name for two-thirds utilitarianism and a snappier way of describing the key thought. Deep pragmatism might work.

(I made these webpages a couple days after the FTX collapse. Buying domains is cheaper than therapy…)

Thanks so much for these reflections. Would you consider saying more about which other actions seem most promising to you, beyond articulating a robust case against "hard-core utilitarianism" and improving the community's ability to identify and warn about bad actors? For the reasons I gave here, I think it would be valuable for leaders in the EA community to be talking much more concretely about opportunities to reduce the risk that future efforts inspired by EA ideas might cause unintended harm.

I laughed as I agreed about the "punchable" comment. Certainly, as a non-STEM individual, much of EA seems punchable to me; SBF's face in particular should inspire a line of punching bags embroidered with it.

But for this to lead you to downgrade EA community building seems like wildly missing the point, which is to be less punchable, i.e. more "normal", "likable", "relatable to average people". I say this from huge experience in movement building... the momentum and energy a movement like EA creates is tremendous and may even lead to saving the world, and it is simply a movement that has reached a maturation waypoint that uncovers common, normal problems - like when you show up to your first real job and discover your college-kid cultural mindset needs an update.

The problem is not EA community building, it is getting seduced by billionaire/elite/Elon culture and getting sucked into it like Clinton hanging out with Epstein... Oops. Don't reduce the growth energy of a rare, energetic movement; just fix whatever sucked people in toward the big money. Said with much love and respect for all you and the early EA pioneers have done. I've seen movements falter, trip and fall... don't do that. Learn and adjust, but do not pull back. EA community building is literally the living body; you can't stop feeding it.

Wei Dai

Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:

  1. Inspired/funded by EA
  2. Taking big risks with other people's lives/money
  3. Attempt at regulatory capture
  4. Large employee exodus due to safety/ethics/governance concerns
  5. Lack of public details of concerns due in part to non-disparagement agreements

3. Attempt at regulatory capture

I followed this link, but I don't understand what it has to do with regulatory capture. The linked thread seems to be about nepotistic hiring and conflicts of interest at/around OpenAI.

Ofer

OpenPhil recommended a $30M grant to OpenAI in a deal that involved the OP (then-CEO of OpenPhil) becoming a board member of OpenAI. This occurred no later than March 2017. Later, OpenAI appointed both the OP's then-fiancée and the fiancée’s sibling to VP positions. See these two LinkedIn profiles and the "Relationship disclosures" section in this OpenPhil writeup.

It seems plausible that there was a causal link between the $30M grant and the appointment of the fiancée and her sibling to VP positions. OpenAI may have made these appointments while hoping to influence the OP's behavior in his capacity as a board member of OpenAI who was seeking to influence safety and governance matters, as indicated in the following excerpt from OpenPhil's writeup:

[...] the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.

Less importantly, see 30 seconds from this John Oliver monologue as evidence that companies sometimes suspiciously employ family members of regulators.

Thanks for explaining, but who are you considering to be the "regulator" who is "captured" in this story? I guess you are thinking of either OpenPhil or OpenAI's board as the "regulator" of OpenAI. I've always heard the term "regulatory capture" in the context of companies capturing government regulators, but I guess it makes sense that it could be applied to other kinds of overseers of a company, such as its board or funder.

who are you considering to be the "regulator" who is "captured" in this story?

In the regulatory capture framing, the person who had a role equivalent to a regulator was the OP who joined OpenAI's Board of Directors as part of an OpenPhil intervention to mitigate x-risks from AI. (OpenPhil publicly stated their motivation to "help play a role in OpenAI's approach to safety and governance issues" in their writeup on the $30M grant to OpenAI).

An important difference is that OpenAI has been distancing itself from EA after the Anthropic split.


I don’t believe #1 is correct. The Open Philanthropy grant is a small fraction of the funding OpenAI has received, and I don’t think it was crucial for OpenAI at any point.

I think #2 is fair insofar as running a scaling lab poses big risks to the world. I hope that OpenAI will avoid training or deploying directly dangerous systems; I think that even the deployments it’s done so far pose risks via hype and acceleration. (Considering the latter a risk to society is an unusual standard to hold a company to, but I think it’s appropriate here.)

#3 seems off to me - “regulatory capture” does not describe what’s at the link you gave (where’s the regulator?) At best it seems like a strained analogy, and even there it doesn’t seem right to me - I don’t know of any sense in which I or anyone else was “captured” by OpenAI.

I can’t comment on #4.

#5 seems off to me. I don’t know whether OpenAI uses nondisparagement agreements; I haven’t signed one. The reason I am careful with public statements about OpenAI is (a) it seems generally unproductive for me to talk carelessly in public about important organizations (likely to cause drama and drain the time and energy of me and others); (b) I am bound by confidentiality requirements, which are not the same as nondisparagement requirements. Information I have access to via having been on the board, or via being married to a former employee, is not mine to freely share.

Honestly, I’m happy with this compromise. I want to hear more about what ‘leadership’ is thinking, but I also understand the constraints you all have.

This obviously doesn’t answer the questions people have, but at least communicating this instead of radio silence is very much appreciated. For me at least, it feels like it helps reduce feelings of disconnectedness and makes the situation a little less frustrating.

Strongly agree here. Simply engaging with the community seems far better than silence. I think the object-level details of FTX are less important than making the community not feel like it has been thrown to the wolves.

I remember the first 24 hours; I was seriously spooked by the quiet. I had no idea that there were going to be hostile lawyers and journalists swarming all over the place, combing the crisis for slips of the tongue to take out of context. Politicians might even join in after the dust settles from the election and the status of the deposits becomes clear.

EA "leadership" was not optimized to handle this sort of thing, whereas conventional charities optimize for that risk by default - e.g. dumping all their bednets in a random village in order to cut costs, so that if people look, they can honestly say they minimized overhead and maximized bednets per dollar.

Thank you for writing this - strong +1. At 80k we are going to be thinking carefully about what this means for our career advice and our ways of communicating - how this should change things and what we should do going forward. But there’s a decent amount we still don’t know and it will also just take time to figure that all out.

It feels like we've just gotten a load of new information, and there’s probably more coming, and I am in favour of updating on things carefully.

Holden - thanks very much for writing this; I strongly agree with the importance of patience, fact-gathering, wisdom, and cold takes. 

During a PR crisis, often the best communication strategy is not to communicate, and to let the media attention die down and move on to the next monetizable outrage narrative about somebody else or some other group that's allegedly done something wrong.

I would add just three supporting points.

First, hot takes tend to provoke hot counter-takes, leading to cycles of accusations and counter-accusations.  When a movement undergoes a moral crisis, and seems guilt-stricken, self-critical, and full of self-doubt, old grievances suddenly get aired, in hopes that the movement's members will be more vulnerable to various forms of moral blackmail, and will change their policies, norms, and ethos under conditions of high emotionality and time pressure. The hot takes and hot counter-takes can also escalate into clannish fractures and ideological schisms in the movement. In other words, any individual hot take might seem innocuous, but collectively, a barrage of hot takes flying in all directions can have huge negative side-effects on a movement's social cohesiveness and moral integrity, and can lead to changes that seem urgent and righteous in the short-term, but that have big hidden costs in the long term.

Second, any hot takes that are shared on EA Forum are in the public domain, and can be quoted by any journalist, pundit, muckraker, blogger, YouTuber, or grievance-holder, for any reason, to push any narrative they want. We are used to EA Forum seeming like a cozy, friendly, in-group medium for open and honest discussions. But in the present circumstances, we may need to treat EA Forum as a de facto EA public relations outlet in its own right. Everything we say on here can be taken, quoted out of context, misrepresented, and spun, by anybody out there who's hostile to EA. Thus, when writing our hot takes here, we might naively imagine the audience being the average EA reader -- rational, kind, constructive, sympathetic. But there's the tail risk that any given hot take will be weaponized by non-EAs to hurt EA in any way they can.

Third, some EA people seem to misunderstand the nature of PR issues, media narratives, and the 'brand equity' of social/activist/moral movements like EA. Certainly, as Holden notes, 'people who care too much about PR can be slippery and deceptive'. Many people outside the professions of PR, crisis management, market research, advertising, political campaigning, etc. tend to view 'public relations' as very nebulous, vague, and unreal -- the realm of sociopathic mind-control wizards.

However, public sentiment can be measured, quantified, analyzed, and influenced. Literally tens of thousands of people in market research do this all day, every day, for corporations, governments, activist movements, etc. There are facts of the matter about public perception of EA as a moral/social brand. Some actual number of people have heard about EA for the first time in the last few days -- maybe tens of millions. Some specific % of them will have formed a negative, neutral, or positive impression of EA. Any negative impressions of EA will last an average of X days, weeks, or years. They will be Y% less (or more) likely to get involved in EA, or to donate money to EA. We don't know what those numbers actually are (though we should probably spend a bit of money on market research to find out how bad the damage has actually been).

There's a psychological reality to public sentiment -- however tricky it can be to measure, and however transient its effects can be. Most of us are amateurs when it comes to thinking about PR. But it's better to recognize that we're newbies with a lot that we need to learn -- rather than dismissing PR concerns as beneath contempt.

Meta note: You've had a lot of sober and interesting things to say on the EA Forum, Geoffrey, and I've been appreciating having you around for these conversations. :)

(It sounds like I'm more pro-hot-takes and less PR-concerned than you and Holden, and I may write more about that in the future, but I'll ironically need to think about it longer in order to properly articulate my views.)

Rob -- I appreciate your comment; thank you! 

Look forward to whatever you have to say, in due course.

I hope I'm not tempting fate here, but I'm quite surprised I haven't already seen the EA Forum quoted "out there" during the present moment. I can only imagine outsiders have juicier things to focus on than this forum, for the moment. I suppose once they tire of FTX/Alameda leaders' blogs and other sources they might wander over here for some dirt.

A few days ago, someone noted a couple of instances and someone else has just noted another.

I'm commenting here to say that while I don't plan to participate in public discussion of the FTX situation imminently (for similar reasons to the ones Holden gives above, though I don't totally agree with some of Holden's explanations here, and personally put more weight on some considerations here than others), I am planning to do so within the next several months. I'm sorry for how frustrating that is, though I endorse my choice.

Frankly, I'm pretty disturbed by how fast things are going and how quick people were to demand public hearings. Over the last 20 years, this sort of thing has happened in extremely bad situations, and in a surprisingly large proportion of them, the calls for upheaval were deliberately and repeatedly sparked by a disproportionately well-resourced vocal minority.

Can you clarify which "public hearings" were demanded? I'm not sure if you're talking about how quickly the bankruptcy process has been moving at FTX, or about the reactions from people on the EA Forum since the news about FTX broke.

Thanks for sharing your thoughts on this; your points on PR especially updated me a bit towards taking PR more seriously.

One piece of pushback on your overall message is that I think there are different kinds of communications than cold or hot takes (which I understand as more or less refined assessments of the situation and its implications). One can:

  • share what one is currently doing about the situation,
  • share details that help others figure out what can be learned from this,
    • (this sometimes might require some bravery and potentially making oneself susceptible to legal risks, but I'd guess that for many who can share useful info it wouldn't?)
  • share your feelings about the crisis.

I'm overall feeling like contributing to the broader truth-seeking process is a generally cooperative and laudable move, and that it's often relatively easy to share relatively robust assessments that one is unlikely to have to backtrack, such as those I've seen from MacAskill, Wiblin, and Sam Harris.

For example I really appreciated Oliver Habryka reflecting publicly about his potential role in this situation by not sufficiently communicating his impression of SBF. I expect Habryka giving this "take" and the associated background info will not prove wrong-headed in 3 months, and it didn't seem driven by hot emotions or an overreaction to me.

I'm waiting to read your take, especially since the conflict of interest issue has come up with FTX and you seem to have some conflicts of interest, especially when it comes to AI safety funding. I'm curious how you have managed to stay unbiased.