Some quick notes 

  • Owen Cotton-Barratt has written a post of reflections about FTX
  • There is a discussion to be had about whether Owen’s content should be on the forum, or indeed further discussion of the whole situation; feel free to have that discussion here. The mod team has suggested (and I cautiously endorse) having a dedicated comment thread on this post for meta-discussion about Owen; details below.
    • I think this could be seen as soft rehabilitation. I don’t endorse that.
  • As elsewhere, I think it may be helpful to split up thoughts and feelings. Personally, I think my feelings do not automatically translate into a need for action. Feelings are important, but they are only part of good arguments.
  • Edited to make these comments shorter (I thought this would be more controversial than it seemingly was)

[A note from the moderation team] We realize that some people might want to discuss how to process this post in light of Owen's recent statement and apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to said thread.



 

Comments (18)

Here are just the headings from the updates and implications sections, lightly reformatted. I don’t necessarily agree with all (or any) of it, and the same goes for my employer.

Updates

Factual updates (the world is now different, so the best actions are different)

  • Less money — There is significantly less money available
  • Brand — EA/longtermism has a lot more media attention, and will have a serious stain on its reputation (regardless of how well deserved you think that is)
  • Distrust — My prediction is that if we polled the EA community, we’d find EAs have less trust in several institutions and individuals in this community than they did before November. I think this is epistemically correct: people should have less trust in several of the core institutions in the community (in integrity; in motives; in decision-making)

Epistemic updates (beliefs about the world I wish I’d had all along, that I discovered in processing this evidence)

  • Non-exceptionalism — Seems less likely that a competent group of EAs could expect to do well in arbitrary industries / seems like making money is generally harder (which means the estimate of future funding streams goes down beyond the immediate cut in funding)
  • Dangerous ideas — We should be more worried that aspects of our memeplex systematically increase the risk of people taking extreme actions that are harmful
  • By the book — The robustness that comes from doing things by the book seems more important
  • Uncompromising utilitarianism — We should be more worried about people orienting to utilitarian arguments in absolutist ways that don’t admit other heuristics
  • Tribalism — I’m more worried that people identifying as EAs is net destructive
  • Conflicts — I’ve moved towards thinking conflicts of interest, broadly understood, are frequent and really guide people’s thinking
  • Integrity — I think that upholding consistently high standards of integrity is particularly important
  • Taking responsibility — Diffusion of responsibility for cross-cutting issues for the EA community can mean nobody works on them
  • Complicity — Tacit tolerance of bad behaviour is a serious issue

Implications

Implications for object-level work:

  • We should be a bit more positive on people doing crucial work within established institutions
  • We should have a somewhat higher bar for funding things
  • We should consider lower salaries
  • We should care a bit more that plans look robustly good
  • We should be a bit more positive on research distillation

Implications for community-building activities:

  • Content (reading lists, talks, etc.) should:
    • Bit more positive on content from outside EA
    • Bit more tools-driven, and a bit less answers-driven
    • Bit more emphasis on the value of looking at things from several perspectives
    • Focus a bit more on social epistemology
  • The vibe of community-building activities should:
    • Lean a bit further away from encouraging people to identify as EA
    • Lean a bit further away from “we have the answers” and towards “we’re giving you the questions”
    • Send somewhat fewer in-group signals
    • Focus on building a culture which is high-integrity
    • Focus on building a culture which treats consequentialist analysis as just one tool in the toolkit
    • Focus on building a culture which asks people to make sure they know who has responsibility for things
  • Structurally, community-building activities should:
    • Put somewhat lower estimates on the monetary value of outcomes or programs
    • Be more transparent about these valuations and other tools for decision-making about community building
    • Scale down activities a little (or slow the growth trajectory)
    • Scale down salaries a bit

Implications for central community coordination:

  • We should lean a bit further towards professionalism
  • We should lean a bit further towards transparency
  • We should consider creating mechanisms for anonymously sharing updates/impressions
  • Orgs should be very explicit about what they are and aren’t taking responsibility for
  • Coordination mechanisms should facilitate making sure someone is taking responsibility for important things
  • We should ensure that people can access some core discussions by application, not just by networking
  • We should lean a bit more towards legible invite criteria, especially for flagship events like Coordination Forum
  • We should lean a bit further towards frugality

Implications for governance:

  • We should increase oversight of projects and decisions
  • We should increase transparency of governance
  • We should err towards doing more impact analyses
  • Projects and orgs should invite accountability primarily for whether they took responsibility for the right things, and how those things went
  • We should give less weight to straightforward consequentialist PR arguments 
  • We should spread governance work over more people
[anonymous]

This must be one of the best FTX reflections I've heard for avoiding the all-too-tempting 'us vs them' / 'goodies vs baddies' mindset.

And I think the assumption that 'almost no one is evil; almost everything is broken' generally leads to both much more accurate takes and much more openness to cooperation moving forward.

“We should do things by the book”: Agree 6/10 → 9/10

This feels like a confusing update, largely due to "by the book" being a vague phrase.

I agree that FTX's "move fast and break things" culture seems to be upstream of some of their problems, and this is a point in favor of, for example, separating the bank accounts of legally distinct entities (which is a "by the book" practice).

But if I imagine a world in which EA organizations caught/prevented the FTX issues, I think this would have required a bunch of non-"by the book" work. I think very few nonprofits would do the kind of due diligence that would have been required to uncover this level of fraud in one of their donors, and if EAs had been compelled to do so, it would have been through arguments like "'by the book' behavior underrates the severity of tail risks, so you should do this weird stuff despite it being costly and different from 'the book'." So I think you could make the opposite update, for a reasonable definition of "do things by the book".

One concrete example from your document: you say that an implication of the update that we should do more things "by the book" is that we should do more impact evaluations. I think impact evaluations are great, but I'm confused about what "book" you are referring to which states that nonprofits should do more impact evaluation. Normal nonprofits do ~0 impact eval. Arguably, all of EA is a reaction to the fact that normal nonprofits do ~0 impact eval. If you think we should do more impact evals, then this does not seem to be justifiable by an appeal to what normal/best-practice/"by the book" nonprofits do.

Thanks for writing and sharing this. I feel confused about the scale on some of these. For example:

it now somewhat less appears that a competent group of EAs could expect to do well in arbitrary industries... “How exceptional are EAs, compared to other social groups?”: 5/10 → 3/10

Pre-FTX collapse, an EA was the richest self-made person in the world under 30 after having switched to an industry they had no experience in (crypto), and this is evaluated as 5/10 ability to do well in arbitrary industries? What is the scale here? What are social groups that are 10/10? Is the claim that Sam was 10/10 but the rest of us were 1/10 so it averages out to 5/10? But weren't something like >0.1% of EAs billionaires at that point? Surely that's at like the 99th percentile of billionaire density for social groups? 

And we are currently 3/10? The fact that most EAs have the right to work in an OECD country seems like it should automatically put us at at least the 80th percentile or something?

Maybe this is pedantic, but this lack of precision on the scale makes it hard for me to interpret the strength of these updates. E.g. if Owen is making the claim that EAs have below-average ability amongst humans to accomplish arbitrary business goals (which is my naïve interpretation of the 3/10 rating), then this seems like a really strong and surprising claim.

(Note: I'm more sympathetic to the claim that EAs do things that are impactful in absolute value but have a negative sign, so the overall net impact is lower. But I understand Owen to be talking about a different thing here.)

I read the endpoints as: 0 is no extraordinary abilities at all (i.e., similar to most groups with similar characteristics like education and geography), and 10 is a group of Nobel prize winners or something.

JP Addison🔸 (Moderator Comment)

Hey everyone, we (the moderators) realize that seeing this post on the Forum may cause some negative emotions, and that it might be worth discussing how to process this post in light of Owen's recent apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to this thread.

We're not doing this to prevent any specific conversations from happening, but rather because we hope that separating the conversations out will make them go better.

We've never tried this setup before but might try it again for important (and difficult) conversations depending on how it goes here.

Nathan (or anyone else): I doubt I will read the whole Google doc, but I would appreciate a ~1-page summary, and I think that would be appropriate to include in this post (with, of course, a disclaimer that it is your gloss on what Owen is saying).

Here's my attempt (just copy-pasting the bolded lines, as Ivy suggests).

Nice job! I was super happy to notice that :) 👍

As someone who read the whole piece, I think you could just read the bolded lines, and then read the explanatory bits below for those lines you find interesting/key. It's also already an outline, so you could just read the bullets further to the left, and read the further-right bits as your curiosity and ethical compass direct you. The leftward bits can always be assumed to function as a summary of the outline (and it's the author's fault if they don't).

[EDIT: This is what Angelina Li did above, nice :) Hopefully if anyone finds any bit intriguing, they go read more in the source :)

The rest is me reflecting on EAs and appropriateness of summaries vs different types of info-gleaning]

I'm not confident that summarizing pieces like this for an EA audience [like, typical summary paragraph-style] really works tbh. Different EAs will need very different things from it. Eg, community builders will be way more interested in the CB section and want to read it in detail even if they disagree, so as to understand the modes of thinking that others might adopt and what they might want to refute or adapt to. 

This is also, after all, just someone's personal reflections, and won't necessarily be the way EAs move forward on any of these things. And for reflections, summaries often cut reasoning and therefore lead to information cascades that need to be addressed later, I think. We already have way too much deference and too many information cascades in EA anyway, so I'd rather see people lean more toward engaging semi-deeply with material that is relevant to them, or not repeat ideas at all tbh. This leads me to say that each reader should be proactive [by reading the bolded/leftward parts of the outline themselves], try to sort out the bits they care about or want to improve their thinking on, and read anything further on that carefully.

It's totally okay to say "this isn't really my bag, and I trust others to get it right eventually, so I'm not gonna engage with this". And if you don't trust others to get it right eventually (and the FTX debacle is certainly around a low-trust theme), I still think EAs should engage semi-deeply (enough to evaluate trust in others or actually do the better job yourself) or hardly at all (even if this means pulling back from EA until you have the spoons to check in deeply on your concerns), because engaging lightly will probably only waste your time, confuse discussion, and waste the time of others if they retroactively have to correct misunderstandings that spread thanks to poor-quality, surface-level engagement. [I've gone on a long time, which makes it sound like a big ask, but honestly I am just talking about semi-deep engagement (e.g., reading the leftward parts of the full outline as the author intended when in flow with the work, and any further details as needed) vs light engagement (reading a summary, which I don't think works for long pieces like this), not mandating very deep engagement (reading the piece in full detail). So I think most people can do it.]

That said, I appreciate your sentiment, and I think a table of contents and better section titles would be extremely helpful for easier semi-deep engagement. Also, using a numbered outline instead of bullet points. I think these are also easier asks, less likely to get future posts hung up in procrastination-land.

[comment deleted]

Very interesting, thanks for sharing Nathan, and to Owen for writing.

Minor question, maybe I just missed it: why is 'Exceptionalism' written in strikethrough every time?

I interpreted that as "against exceptionalism" or "place lower credence in EA exceptionalism", but I'm not sure.

That was my guess, but other phrases like Uncompromising utilitarianism do not get the same treatment. Maybe exceptionalism is a theory about the world while the others are policies that can be adopted or rejected?

 

“How much do we need robust application-based mechanisms to access core discussions?”: 5/10 → 8/10...

For a few months I was collaborating with the CEA Events team to plan the Summit on Existential Security [LINK], which is in the broad category referred to here (more recently I stepped back from that). Since plans were already significantly in motion before the FTX collapse, some of the updates weren’t very actionable this time around (e.g. the venue had been paid for, so there’s relatively little to do from the frugality update; and we’d largely settled on an invite process).

Even if it was not possible to allow applications this time, I think it would be useful for people to know that events like this are happening, and ideally to share some key takeaways afterwards. But the link is not active.

[comment deleted]