This is a linkpost for https://docs.google.com/document/d/1FbEninDsMR6nMJr8EUc3snDn6MFMBFS_xVGVIBJxhp0/edit
Some quick notes
- Owen Cotton-Barratt has written a post of reflections about FTX.
- There is a discussion to be had about whether Owen’s content should be on the Forum, or indeed further discussion of the whole situation; feel free to have that discussion here. The mod team has suggested (and I cautiously endorse) having a dedicated comment thread on this post for meta-discussion about Owen; details below.
- I think this could be seen as soft rehabilitation. I don’t endorse that.
- As elsewhere, I think it may be helpful to split up thoughts and feelings. Personally, I think my feelings do not automatically translate into a need for action. Feelings are important, but they are only part of good arguments.
- Edited to make these comments shorter (I thought this would be more controversial than it seemingly is).
[A note from the moderation team] We realize that some people might want to discuss how to process this post in light of Owen's recent statement and apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to said thread.
Here are just the headings from the updates + implications sections, lightly reformatted. I don’t necessarily agree with all/any of it (same goes for my employer).
Updates
Factual updates (the world is now different, so the best actions are different)
Epistemic updates (beliefs about the world I wish I’d had all along, that I discovered in processing this evidence)
Implications
Implications for object-level work:
Implications for community-building activities:
Implications for central community coordination:
Implications for governance:
This must be one of the best FTX reflections I've heard for avoiding the all-too-tempting 'us vs them' / 'goodies vs baddies' mindset.
And I think the assumption that 'almost no one is evil; almost everything is broken' generally leads to both much more accurate takes and much more openness to cooperation moving forward.
This feels like a confusing update, largely due to "by the book" being a vague phrase.
I agree that FTX's "move fast and break things" culture seems to be upstream of some of their problems, and this is a point in favor of, for example, separating the bank accounts of legally distinct entities (which is a "by the book" practice).
But if I imagine a world in which EA organizations caught/prevented the FTX issues, I think this would have required a bunch of non-"by the book" work. I think very few nonprofits would do the kind of due diligence that would have been required to uncover this level of fraud in one of their donors, and if EAs had been compelled to do so, it would have been through arguments like "'by the book' behavior underrates the severity of tail risks, so you should do this weird stuff despite it being costly and different from 'the book.'" So I think you could make the opposite update, for a reasonable definition of "do things by the book".
One concrete example from your document: you say that an implication of the update that we should do more things "by the book" is that we should do more impact evaluations. I think impact evaluations are great, but I'm confused about what "book" you are referring to that states nonprofits should do more impact evaluation. Normal nonprofits do ~0 impact eval. Arguably all of EA is a reaction to the fact that normal nonprofits do ~0 impact eval. If you think we should do more impact evals, then this does not seem justifiable by an appeal to what normal/best-practice/"by the book" nonprofits do.
Thanks for writing and sharing this. I feel confused about the scale on some of these. For example:
Pre-FTX collapse, an EA was the richest self-made person in the world under 30, after having switched to an industry they had no experience in (crypto), and this is evaluated as 5/10 ability to do well in arbitrary industries? What is the scale here? What are social groups that are 10/10? Is the claim that Sam was 10/10 but the rest of us were 1/10, so it averages out to 5/10? But weren't something like >0.1% of EAs billionaires at that point? Surely that's at like the 99th percentile of billionaire density for social groups?
And we are currently 3/10? The fact that most EAs have the right to work in an OECD country seems like it should automatically put us at at least the 80th percentile or something?
Maybe this is pedantic, but this lack of precision on the scale makes it hard for me to interpret the strength of these updates. E.g. if Owen is making the claim that EAs have below-average ability amongst humans to accomplish arbitrary business goals (which is my naïve interpretation of the 3/10 rating), then this seems like a really strong and surprising claim.
(Note: I'm more sympathetic to the claim that EAs do things that are impactful in absolute value but have a negative sign, so the overall net impact is lower. But I understand Owen to be talking about a different thing here.)
I read the endpoints as: 0 is no extraordinary abilities at all (i.e., similar to most groups with similar characteristics like education and geography), and 10 is a group of Nobel prize winners or something.
Hey everyone, we (the moderators) realize that seeing this post on the Forum may cause some negative emotions and that it might be worth discussing how to process this post in light of his recent apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to this thread.
We're not doing this to prevent any specific conversations from happening, but rather because we hope that separating the conversations out will make them go better.
We've never tried this setup before but might try it again for important (and difficult) conversations depending on how it goes here.
Nathan (or anyone else), I doubt I will read the whole Google doc but would appreciate a ~1-page summary, and I think that would be appropriate to include in this post (with, of course, a disclaimer that it is your gloss on what Owen is saying).
Here's my attempt (just copy-pasting the bolded lines, as Ivy suggests).
Nice job, I was super happy to notice that :) :thumb:
As someone who read the whole piece, I think you could just read the bolded lines, and read the explanatory bits below for those lines you find interesting/key. It's also already an outline, so you could just read the bullets further to the left, and read the further-right bits as your curiosity and ethical compass direct you. Reading the leftward bits can always be assumed to function as a summary of an outline (and it's the author's fault if it doesn't).
[EDIT: This is what Angelina Li did above, nice :) Hopefully if anyone finds any bit intriguing, they go read more in the source :)
The rest is me reflecting on EAs and appropriateness of summaries vs different types of info-gleaning]
I'm not confident that summarizing pieces like this for an EA audience [like, typical summary paragraph-style] really works tbh. Different EAs will need very different things from it. E.g., community builders will be way more interested in the CB section and want to read it in detail even if they disagree, so as to understand the modes of thinking that others might adopt and what they might want to refute or adapt to.
This is also, after all, just someone's personal reflections and won't necessarily be the way EAs move forward on any of these things. And for reflections, summaries often cut reasoning and therefore lead to information cascades that need to be addressed later, I think. We already have way too much deference and information cascades in EA anyway, so I'd rather see people lean more toward engaging semi-deeply with material that is relevant to them, or not repeating ideas at all tbh. This leads me to say that each reader should be proactive [by reading the bolded/leftward parts of the outline themselves], try to sort out the bits they care about or want to improve their thinking on, and read anything further on that carefully.
It's totally okay to say "this isn't really my bag, and I trust others to get it right eventually, so I'm not gonna engage with this". And if you don't trust others to get it right eventually (and the FTX debacle is certainly around a low-trust theme), I still think EAs should engage semi-deeply (enough to evaluate trust in others or actually do the better job yourself) or hardly at all (even if this means pulling back from EA till you have the spoons to check in deeply on your concerns), because engaging lightly will probably only waste your time, confuse discussion, and waste the time of others if they retroactively have to correct misunderstandings that spread thanks to poor-quality/surface-level engagement. [I've gone on a long time, which makes it sound like a big ask, but honestly I am just talking about semi-deep engagement (e.g., reading the leftward parts of the full outline as the author intended when in flow with the work, and any further details as needed) vs light engagement (reading a summary, which I don't think works for long pieces like this), not mandating very-deep engagement (reading the piece in full detail). So I think most people can do it.]
That said, I appreciate your sentiment, and I think a table of contents and better section titles would be extremely helpful for easier semi-deep engagement. Also, using a numbered outline instead of bullet points. I think these are also easier asks, less likely to get future posts hung up in procrastination-land.
Very interesting, thanks for sharing Nathan, and to Owen for writing.
Minor question, maybe I just missed it: why is 'Exceptionalism' written in strike-through every time?
I interpreted that as "against exceptionalism" or "place lower credence in EA exceptionalism", but I'm not sure.
That was my guess, but other phrases like "Uncompromising utilitarianism" do not get the same treatment. Maybe exceptionalism is a theory about the world, while the others are policies that can be adopted or rejected?
Even if it was not possible to allow applications this time, I think it would be useful for people to know that events like this are happening, and ideally to share some key takeaways afterward. But the link is not active.
Here's a post from a few months ago where they announced the event. (Maybe this is what Owen wanted to link to.)
Thanks!