
(Status: not draft amnesty, but posting in that spirit, since it's not as good as I'd want it to be, and otherwise I probably won't ever post it)

In my experience, EA so far has been a high-trust community. That is, people generally trust other people to behave well and in accordance with the values of the community. 

Being high-trust is great! It means that you can spend more time getting on with stuff and less time carefully checking each other for bad behaviour. It's also just nicer: It feels good and motivating to be trusted, and it is reassuring to support people you trust to do work.

I feel like a lot of posts I've seen recently have been arguing for the community to move to a low-trust regime, particularly with respect to EA organizations. That includes calls for:

  • More transparency ("we need to rigorously scrutinise even your small actions in case you're trying to sneak bad behaviour past us")
  • More elaborate governance ("there is a risk of governance capture and we need to seriously guard against it", "we don't trust the people currently doing governance")

Sometimes you have to move to a low-trust regime. Organizations commonly shift from high-trust to low-trust as they grow, since with more actors involved, not everyone can be assumed to be trustworthy. But I do not think that the EA community actually has the problems that require low trust, and I think the switch would be very costly.

Specifically, I want to argue:

  1. Low-trust regimes are expensive, both in terms of resources and morale
  2. The people working in current EA orgs are in fact very trustworthy
  3. The EA community should remain high-trust (with checking)

Low-trust is costly

Low-trust regimes impose costs in at least three ways:

  1. Costlier cooperation
  2. Costlier delegation
  3. General efficiency taxes

The post Bad Omens in current EA Governance argues that, due to the possibility of conflicts of interest, we should break up the organisations that currently share ops support through EVF. This is a clear example of cost 1: if we can't trust people, then we can't just share our resources; we have to keep everyone at arm's length. You can read in the comments various people explaining why this would be quite expensive.

Similarly, you can't just delegate power to people in a low-trust regime. What if they abuse it? Better to require explicit approval up the chain before they do anything serious like spend some money. But if you can't spend money you often can't do things, and activity ends up being blocked on approval, politics, and perception.

When you actually try to get anything done, low-trust regimes typically require lots of paper trails and approvals. Anyone who's worked in a larger organization can testify to how demoralizing and slow this can be. Since any decision can be questioned after the fact, there is no limit to how much "transparency" can be demanded, and how many pointless forms, proposals, reports, or forum posts can end up being produced. I think it is very easy to underestimate how destructive this can be to productivity.

Finally, it is plain demoralizing to be in a low-trust regime. High-trust says "Yes, we are on the same team, go and attack the problem with my blessing!". Low-trust says "I guess I have to work with you but I'm expecting you to try and steal from me as soon as you have the opportunity, so I'm keeping an eye on you". Where would you rather work?

Current people in EA organisations are trustworthy

(Disclaimer: I know quite a lot of people who work in EA organisations, so I'm definitely personally biased towards them.)

The FTX debacle has led to a lot of finger-pointing in recent months. A particular pattern has been posts listing large numbers of questions about the behaviour of particular organizations or individuals over the last few years. These often feel accusatory all by themselves: look at this big list of suspicious behaviour, surely something shady is going on! But it seems to me that in every instance that I've seen there has either been a good explanation or the failing has been at worst a) bad decisions made for good reasons, b) lapses in personal judgement, or c) genuine disagreements about which actions are worth doing.

Crucially, none of a), b) or c) are in my opinion things that justify a switch to low-trust. They suggest that we have normal, fallible people who are acting in good faith and doing their best. That's really the best that we can hope for! Low-trust measures won't help with any of this.

If anything, this argues that we could be higher-trust. Expending a lot of energy hunting for bad behaviour and not finding much is evidence that people are more trustworthy, not less!

Here are some examples. I've not attempted to be comprehensive; these are the ones that came to mind when I was writing this post. I'd be interested in examples that people think are neither explained nor at worst a), b) or c). I'm also including people saying things that turned out to be factually wrong, as these are examples of looking for bad behaviour and not finding it.

Trust but verify

I want the EA community to remain high-trust. It's part of what makes us effective and I don't think we're justified in throwing it away now (if ever). Calls to make it low-trust make me feel sad and less like it's a community I want to be in. I think we should just decide to Not Do That.

There are some cheap things we can do that don't damage trust too much. For example, checking from time to time that people have behaved trustworthily is a good idea ("trust but verify" is a good motto).

Overall I have a few concrete suggestions for making things better:

  • Make sure you're updating your beliefs about the trustworthiness of people based on the results of checking, not the fact that the checking is happening.
  • If you agree with "trust but verify", make the background level of trust clear when you're proposing to do a bunch of aggressive verification, e.g. "I don't have any particular reason to expect bad behaviour here, but in the spirit of 'trust but verify' I would like to ask the following questions..."
  • If you're proposing a change in behaviour, seriously consider the costs, which includes making the proposal specific enough that its costs are clear.

Comments

I think it's not quite right that low trust is costlier than high trust. Low trust is costly when things are going well. There's kind of a slow burn of additional cost.

But high trust is very costly when bad actors, corruption or mistakes arise that a low trust community would have preempted. So the cost is lumpier, cheap in the good times and expensive in the bad.

(I read fairly quickly so may have missed where you clarified this.)

To re-frame this:

  • best: high-trust / good actors
  • good: low-trust / good actors
  • manageable: low-trust / bad actors
  • devastating: high-trust / bad actors

High-trust assumes both good motivations and competence. High trust is nice because it makes things go smoother. But if there are any badly motivated or incompetent actors, insisting on high trust creates conditions for repeated devastating impacts. To further insist on high trust after significant shocks means that people who no longer trust in good motivations and competence will leave.

FTX was a high-trust/bad actor shock event. The movement probably needs to operate for a bit in a low-trust environment to earn back the conditions that allow high-trust. Or, the movement can insist on high-trust at the expense of losing members who no longer feel comfortable or safe trusting others completely.


I was arguing to the contrary: the inquiries post-FTX have shown very little untrustworthy behaviour in the community as a whole. So if anything we should regard this as a validation of our trust. 

Certainly I wouldn't trust FTX after this, but I think extending the reduced trust to the rest of the community in any significant way is mistaken.

Perhaps you're worried about more things like FTX happening in the future. To that I would say: just accept the risk - being high-trust means you can get suckered. If it doesn't happen too often, maybe it's worth the cost. And right after the event is the classic time for people to greatly overestimate how likely such events are (or how likely we should have considered them to be in the past).

The key actors involved in FTX were extremely close to the EA community. SBF became involved after a 1:1? conversation with Will MacAskill, worked at CEA for a short while, held prime speaking slots at EAG, and set up and funded a key organization (FTX fund). Caroline held an officer position in her university EA group. It's fair to say the people at the center of the fraud were embedded in, and more tightly aligned with, the EA movement than most people connected with EA. It's a classic example of high-trust / bad actors - it only takes a few of them to cause serious damage.

Is this just a black swan event? Perhaps. Are there more bad actors in the EA community? Perhaps.

You are certainly welcome to keep treating EA as a high-trust community, but others have good reason not to.

SBF became involved after a 1:1? conversation with Will MacAskill, worked at CEA for a short while, held prime speaking slots at EAG, and set up and funded a key organization (FTX fund).

Only the last point seems concerning to me because Sam was working closely together with figures very central within EA at a time when some of the red flags should already have been visible. By contrast, I think it's unreasonable to hold anyone responsible for another person's negative impact if you motivate them to do something in a 1-on-1 conversation or if some "bad actor" briefly worked at [central EA organization]. We can't be responsible for the behavior of everyone we ever interact with! It's not always possible to vet people's character in a single conversation or even during a brief period of working together. (And sometimes people get more corrupted over time – though I'd expect there to be early warning signs pretty much always.) I think the EA community made big mistakes with respect to FTX, but that's precisely because many EAs have interacted with Sam over many years and worked closely together with him before the collapse.

 

"[T]he inquiries post-FTX" have been largely deflected due to legal concerns. I'm not going to second-guess the presumed advice of the relevant lawyers here, but that's hardly the same as there having been a public and independent investigation that reveals the potential concerns  to have been unfounded. Given the absence of any real information (other than knowing that someone senior heard that SBF was under criminal investigation, shared that report, and got muted responses), the range of plausible narratives in my view ranges from no errors to ordinary errors of judgment to severe errors of judgment (which would provide significant reason to believe that the relevant actors' judgment is "untrustworthy" in the sense of not being reliable) to willful blindness.

The really tricky issue is something like second impact syndrome, where death or severe injury occurs because the head was hit a second time before healing from the first.

So I would be a little more careful with EA for a few years.

I guess I find your proposal that EA operate in low-trust mode for a bit to win back the conditions that allow high trust confusing, because I would expect that shifting to low-trust mode and then shifting back to high trust would be very hard - as in, almost never the kind of thing that happens.

EA started in low-trust mode (e.g. iirc early GiveWell was even suspicious of the notion of QALYs, which is why they came up with their own metrics) and gradually shifted towards higher trust. So it seems plausible to me that we can go back to low-trust mode and slowly win back trust, though maybe this will take too long and EA will die or fade into irrelevancy before then.

This is quite interesting and reminds me of a short option position from my time as a hedge fund manager - you earn time decay or option premium when things are going well or stable, and then once in a while you take a big hit (and a lot of people/orgs do not survive the hit). This is not a strategy I would follow from a longer-term, risk-adjusted-return point of view. I would rather not be short a put option but instead be long a call option and try to minimise my time decay or option premium. The latter is more work and more time-consuming, but as a hedge fund manager I managed to construct very large option-structured positions with almost no time decay. In EA terms, some of the ways I would structure long call options on EA whilst minimising risks would be: look for strong founders and teams, projects that are neglected with large convex upside and that are tractable and cost-effective even with base-case delivery (GWWC, Founders Pledge and Longview were good examples of this), and continue to fund promising ones until other funders come in.

As a general observation, I think EA overemphasises expected return and underemphasises risk-adjusted return, especially when in some cases sensible risk management can reduce risk a lot without reducing expected return much (e.g. ensuring we have experienced operational, legal, regulatory and risk management expertise). This may have something to do with our very long impact time horizons and EA's preference for working things out from first principles.

I'd also like to emphasise that it does not always have to be bad actors: it could also be people acting outside their level of expertise and/or competence in good faith. And trust, perhaps like market cycles, can be oversupplied at some times and in some areas, and undersupplied at others.

I think of it in terms of "Justified trust". What we want is a high degree of justified trust. 

If a group shouldn't be trusted, but is, then that would be unjustified trust.

We want to maximize justified trust and minimize unjustified trust.

If trust isn't justified, then you would want correspondingly low levels of trust.

Generally: [unjustified trust] < [low trust where trust isn't justified] < [justified trust]

No, I didn't talk about this. I agree that you can frame low-trust as a trade where you exchange lower catastrophe risk for higher ongoing costs.

Through that lens a decent summary of my argument is:

  • People underestimate the ongoing costs of low-trust
  • We are not actually at much risk of catastrophe from being high-trust
  • Therefore we should stick with high trust

Yep, I agree. I see it as high trust > low trust, but being misled about which one you are in isn't a good move.

I think it's clearer as a two-by-two matrix of trust vs. trustworthiness. In rough order of goodness:

  • High Trust, High Trustworthiness: Marriages, friends, some religions, small teams. Very good.
  • Low Trust, High Trustworthiness: The experience of many good people in normal countries, large corporations, etc. Wasteful, frustrating, and maybe the best people will leave.
  • Low Trust, Low Trustworthiness: Much of the world is like this. Not ideal at all but we have systems to minimise the damage.
  • High Trust, Low Trustworthiness: Often caused by new entrants exploiting a poorly protected community. Get scammed or robbed.

And so I claim: we have High Trustworthiness, so moving down the column to Low Trust is just shooting ourselves in the foot.

'Trust' can mean a few different things. Here it's used like 'trust someone has good intentions'. But it could also mean 'trust someone's judgement'.

Lack of the second kind of trust in EA leadership could make someone favour more/broader governance and transparency, even if they have the first kind of trust.

I think this is an important distinction.

People can inadvertently do bad things with very good intentions due to poor judgement; there is even the proverb 'the road to hell is paved with good intentions'.

EA emphasises doing good with evidence, with reasoning transparency considered highly important. People are fallible, and in the case of EA often young and of similar backgrounds, and particularly given the potential consequences (working on the world's biggest issues, including x-risks), big decisions should be open to scrutiny. And I think it is a good idea to look at what other companies are doing and take the best bits from the expertise of others.

For example, the Wytham Abbey purchase may (I haven't seen any numbers myself) make sense from a cost effectiveness perspective, but it really should have been expected that people would ask questions given how grand the venue seems. I think the communication (and at least a basic public cost effectiveness analysis) should have been done more proactively.

I agree. To take the distinctions of trust one step further - there's a difference between trust in the intentions and judgements of people, and trust in the systems they operate in.

Like, I think you could be trusting of the intentions and judgement of EA leadership, but still recognise that people are human, and humans make mistakes, and that transparency and more open governance leads to more voices being heard in decision making processes, which leads to better decisions. It's the 'Wisdom of Crowds' kind of argument.

transparency and more open governance leads to more voices being heard in decision making processes, which leads to better decisions

Perhaps I'm just a die-hard technocrat, but I'm very unconvinced that this is actually true. Do we have any good examples either way?

Agreed, particularly as bad bureaucracy could have bad results even if everyone has good intentions and good judgement. For example, someone might make the best decision possible given the information they have available, but it has unintended negative consequences because, due to the way the organisation/system was set up, they are missing key information which would have led to a different conclusion.

I think this is a key point of disagreement. Most of the proposed governance changes seem to me like they would have some protective effect against bad actors, but very little effect on promoting good decision-making. I'd be much more on board if people had proposals that I actually thought would help leadership make better decisions, but I don't think most of the transparency-oriented proposals would actually do that.

(I think actually justifying this statement would be a whole other post so I'll just state it as an opinion.)

The way I think EA orgs can improve decision-making is by introducing some kind of meaningful competition. At the moment, the umbrella structure, multi-project orgs, and lack of transparency make that all but impossible.

Split that into a cluster including an org that, say, just runs events and isn't officially supported by EVF, and you have a level playing field and a useful comparison with another org that also starts to run events - while having a big enough space of event types that both could semi-cooperate.

If both orgs are also transparent enough that substantial discrepancies between how well they operate are visible, then you have a real possibility of funders reacting to such discrepancies in a way that incentivises the orgs and their staff to perform well. At the moment I just don't feel like these incentives exist.

Thank you for this post, Michael.

I heard someone say at an EA event in April, "We are probably the largest high-trust community - other than maybe some religious groups - in the world." I've since heard someone else suggest the military as another example, but I think the fact that I'm struggling to think of other examples tells me that we have/had something very special.

Are orgs/movements like Doctors Without Borders, Greenpeace, Peace Corps, Rotary etc. considered relatively high trust too?

Thanks for the question - it's made me realise I should clarify: I didn't mean I'm struggling to think of examples of other large, high-trust communities. I meant I'm struggling to think of examples of other comparably large and comparably high-trust communities.

But yes, I imagine there are in fact several examples I'm not familiar enough with or haven't thought of, but the levels of trust I've experienced in this community still strike me as weirdly high for a philosophy that can't rely on the historically successful fear of deities or hatred of out-groups[1] to motivate a common purpose. I'm with Michael on thinking that trust has been broadly justified wrt current people in EA orgs (I wouldn't expect or want you to take my word for it but I feel it's worth stating), so I think this is something to be proud of.

[1]A big over-simplification, but I think these are at least very common powerful contributors to religious/military engagement that aren't widely considered a core part of using evidence and reason to better love all sentient beings.

Hi Michael,

I appreciate the post, particularly because I have been slowly updating towards the low-trust side of things. I have a few points I'd like to raise.

  1. I think that an important update is that "EA leadership" is fallible, and I think more people (especially new and/or younger EAs) should not (entirely) defer. You may not think "not deferring to EA leadership" counts as "trusting EA less", but I do. For me, in the past, highly trusting EA leadership has meant largely trusting their cause and career prioritization without digging too much into it.
  2. I think that not trusting/not deferring to EA leadership applies especially to career choice. 80k seems to add new career paths to its list of recommended career paths every few months (I don't have a source - this is just based on my subjective observations). This doesn't discount them entirely, but it does mean that even if 80k doesn't currently recommend a career path, it may be very impactful for an individual to pursue that path (especially if they're well suited for that path and have good reasons to think it may be impactful).
  3. Besides not deferring to EA leadership, there are certain things many EAs do because of (what I would label as) high trust in other EAs that I think we should probably do less of. I think this includes:
    1. Romantic/sexual relationships in certain cases. This includes boss/employee, funders/fundees, and probably other cases.
    2. Limited accountability into whether grantees used their money effectively.
    3. Limited investigation into whether community building efforts are effective. 
    4. Not stating that certain grantees (including individuals or organizations) should not receive additional funding. (As in, I think we should increasingly publicly "name names" and state that certain people/orgs are being ineffective.)
      1. I feel like this fundamentally requires less trust than what the EA community currently has and will also, ultimately, reduce trust a little. But I think it's necessary.

If your point is that we shouldn't defer to people's opinions without fully understanding their arguments and we should verify people are doing good/effective work but we should believe they're acting in good faith... I think I maybe agree? I'm still not sure though what "believing people are acting in good faith" does and doesn't include. 

Well, this is what you get for using vague words like "trust" :)

I didn't mean to talk about "epistemic trust"/deference in the post. I don't think that people should defer to "leadership" much at all (maybe a bit more to "experts", who are not the same people).

That is very different to trusting them to behave well, represent our movement, and take decisions that are reasonable and that most of us would approve of if given full context. That's what I'm talking about, and what I think has been under threat recently.

I'm not saying deference isn't a problem, just not the one I was talking about.

Hi Michael,

Thank you for the clarification!

But it seems to me that in every instance that I've seen there has either been a good explanation or the failing has been at worst a) bad decisions made for good reasons, b) lapses in personal judgement, or c) genuine disagreements about which actions are worth doing.

I think you make a good point that many things are (a) or (b), which are relatively fine. And I believe (and maybe we agree) that EAs should still verify these things in sketchy-looking situations (including the purchase of Wytham Abbey).

But in the case of "c) genuine disagreements about which actions are worth doing", it's possible we disagree. I feel like definitionally this means we don't believe other EAs are behaving well or representing our movement. In other words, "genuine disagreement about which actions are worth doing" sometimes is good cause to trust other people less.

I think you have valid reason to "distrust" EAs if you strongly disagree with the reasoning for the purchase of Wytham Abbey or for investing a lot in community building or for promoting longtermism. I strongly disagree with flat-earthers, and I would not "trust" a community based on evidence/reasoning that has a lot of flat-earthers.

I think at the end of the day, this discussion depends on your definition of "trust". It probably comes down to vibes. And it sounds like you're saying "even if you strongly disagree with someone, keep the positive vibes", and what I'm saying is, "sometimes it's okay to have negative vibes when you strongly disagree with someone."

It's important not to conflate two uses of the word "trust" -- you can trust someone's judgment, trust them not to be malicious, neither, or both. "Bad decisions" and "lapses in personal judgement" are perfectly consistent, if not potentially supportive, of not trusting someone's judgment.

It's tough to be "high-trust" at the top levels of a movement without also being "low-accountability." Low accountability has its harms too, especially when it comes to relations with non-EAs (or those who are not yet EAs) who are outside of the high-trust community. For those people, you have to show, not only tell, that you are trustworthy and reliable. The significance of this cost depends on what you think EA should be and will be in the future. To me, saying that FTX was n=1 goes both ways; you can't count on major funders who become EAs, accept the community of trust, and then get really wealthy. So being seen as at least moderately trustworthy in the public eye is important; that requires things like proactively providing an explanation for why having a significant portion of your organization's assets tied up in a $15MM manor house is a good idea.

My own view is that most of the calls for greater transparency etc. have been directed at major decisions of significant organizations and/or methods that are relatively low cost. In addition, some of it has called for a wider circle of transparency rather than more work per se. Returning to Wytham Abbey, EVF clearly had to justify the decision to Open Phil to get the $15MM, so proactively releasing the business case for that expenditure should have been very cheap. I think some of the negative reactions are to the idea that organizations need to be accountable/responsible to big donors but not smaller ones & rank-and-file EAs. That this reduces morale and makes people feel like they have less of a voice is unsurprising to me.

I think that's a good analysis, and I think we should strive to be a high-trust community.

But: you can't just tell people to shut up and simply trust. Trust needs to be earned. One effect of democratic processes is that candidates demonstrate their trustworthiness, so after the democratic process, people can trust them.

Right now, you would ask any new EA community member (and most EAs are new) to "just trust other EAs", on no other basis than "past EAs have been trustworthy".

I can imagine a few things that important figures can do in order to increase their trustworthiness:

  • Listen to and engage with the EA community.
  • Be transparent about their decisions and strategy.
  • Not be afraid to limit their own power by introducing oversight.

I can also imagine a few things that community members can do to improve their trustworthiness:

  • be vouched for by other trustworthy people
  • repeatedly demonstrate good intentions

As it stands now, I don't trust the EA community to identify non-trustworthy EAs to me. In a high-trust culture, any abuse of trust should be costly. I have not yet seen an example of this.

I agree with much of this. However, I also don't think we should go around asserting "past EAs have been untrustworthy" with little evidence or fact-checking. This does a lot of damage not just to the reputation of the individuals and organisations (which is important for their impact) but to the high-trust environment that we have right now (which is also important for our community's impact). We largely have this high-trust environment because it's earned (the reason I trust so many people and organisations is that they've proven to be trustworthy time and time again). Yes, some trust has absolutely been let down recently (e.g. SBF), and we need to learn from that, but we need to learn the right lessons, not the wrong ones.

A lot of the valid points within the Bad Omens in current EA Governance post you wrote were undermined by (or at least buried by) its accusatory superlative language, lack of fact-checking, and misleading statements (e.g. "EVF paid itself" when talking about an EA Funds grant to GovAI, which is better understood as: EA Funds grant managers awarded a grant to Allan Dafoe and Ben Garfinkel to set up GovAI, and EVF was approached to fiscally sponsor it). Many inaccurate and misleading statements (and lots of superlative accusatory language) are still present in the post despite the feedback, and this contributes to an unjustified low-trust environment.

I strongly encourage you to continue to push directionally for good governance and legible trustworthiness, as I think it's something worth pursuing. I do however worry about the approach you've taken.

Your previous post demonstrated much stronger reasons to not trust you than those you accused of being untrustworthy. However, I’m trying (and failing to some extent as I’m human) to take on board the points you make and to be charitable while also pushing back appropriately. This is the approach I appreciate from others in the community who I trust and respect. I hope that you are able to take this feedback on board and try to hold yourself to the standards you seem to hold of others, that in turn would help me trust future critiques you make.

I just want to say that I have been very impressed by your response to my post! I agree, I should have done more fact-checking before posting and I should have used more charitable language. This has shaped the debate in a combative way I didn't intend. I've already learned from this and will hold myself to a higher standard in future interactions with the community!

Thanks for holding me to a high standard in return, you have been nothing but nice. This has increased my trust in you personally!

Thanks very much for writing this - it really made my day to read it 😀

Hope you have a wonderful rest of 2022!

Your previous post demonstrated much stronger reasons to not trust you than those you accused of being untrustworthy. 

... strikes me as "not nice" fwiw, though overall it's been cool to see how you've both engaged with this conversation.

Do you feel comfortable sharing specific examples of what EA orgs and people have done to earn your high trust? 

What specific examples of abuse of trust that haven't been made costly do you think should be made costly? Why those ones? And how specifically should they be identified or made costly? Do you have examples from other communities who've done this well in a way that has improved trust and ability to have impact?

I have not come across serious abuses of trust, which is surprising even in a high-trustworthiness environment.

For example, actual fraud can't be at 0% when there's so much money around.

The only example I can think of, SBF, seems to have been thoroughly expelled from EA.

Nice post! One question: what are some things that could happen that you would view as requiring a move to low-trust?

Great question!

First of all, I painted it as very binary in the post, but obviously it's really a sliding scale. But still, things that would warrant a significant shift towards low-trust in my opinion:

  • Significant uncorrected corruption, e.g. major grants given to clearly worse projects without plausible justification.
  • Abuse of power, e.g. EVF forcing sub-orgs to take positions they disagreed with under legal threat.
  • Widespread behaviour out of line with our values, e.g. systematically lying to people.

To be honest, even then I would probably just consider the specific organisation where this happened to be suspect. We can recover from that: disband or de-emphasize the org, try again. It would take quite a lot of this for me to stop trusting the community in general.

Thanks, Michael. I connect with the hope of this post a lot, and EA still feels unusually high-trust to me.

But I suspect that a lot of my trust comes via personal interactions in some form. And it's unclear to me how much of the high level of trust in the EA community in general is due to private connections vs. public signs of trustworthiness.

If it's mostly the former, then I'd be more concerned. Friendship-based trust isn't particularly scalable, and reliance on it seems likely to maintain diversity issues. The EA community will need to increasingly pay bureaucratic costs to keep trust high as it grows further.

I'd be interested in any attempts at quantifying the costs to orgs of different governance interventions and their impact on trust/trustworthiness.

To add another datapoint, a lot of my trust comes from:

  • Stories from more engaged EAs about difficult trade-offs or considerations made by EA leaders/orgs (which were not available publicly - often personal interactions, semi-private Facebook discussions, or just observations of behaviour over time which are hard to absorb coming in new to a community)
    • This is also not scalable, not accessible to people without networks, and not very reliable (e.g. what if my friend misremembered some facts over the years, or had gaps in their knowledge?)
  • Personal experiences, mostly with people I consider friends, but also people I have worked with and developed professional working relationships with (many friendships have come out of working relationships)

A few disjointed challenges, basically:

  1. Governance has value regardless of degree of trust
  2. I find EA leadership's attitude low trust in some instances

There's a false dichotomy implied in this post: that high trust and less governance go together, and that having strong governance means you have low trust. I don't think this is the case; in fact, you often create high trust by having appropriately strong governance, and destroy trust by lacking it.

I also find the characterisation of governance as being there to protect against bad actors narrow, even naive, both in this post and in the comments. Governance primarily should be in place to protect the average good person from doing something ill-considered, such as irregular and inexact accounting for funding streams which could cause problems later on, or slap-dash recruitment practices which are not designed to ensure you are being fair to all applicants. I think EA could benefit from a lot more of that; just because we're well intentioned and generally more conscientious doesn't mean we're immune from mistakes.

There are many ways in which the relationship between EA's (nominal) leadership and the EA base seems low-trust to me. For instance:

  1. I've worked in organisations of 6000 people where any communications had to go to the CEO's office, to ensure everything was "on message"; most people I knew found that process controlling and low-trust, but also understandable. For example, a media announcement that team X was committing £Y to a certain area would have financial implications for the whole organisation, which could cause budgetary challenges. This approach reminds me a lot of how EA leadership actively discourages people from speaking to the media, to the point of directly ringing people to discourage them.

  2. When I spoke to some EA leaders about excessive deference within EA, they cited the unilateralist's curse as a reason for not speaking up about uncertainties leaders have in key ideas.

Edit for context: Basically I feel it's wrong to say we are high trust across the board, and if we're going to discuss trust I feel a more rounded view is needed. So I'm bringing in these two anecdata points to spark another nuance in the conversation, but not making a "case closed" argument about just how representative these instances are or just how justifiable they are.

Probably a consequence of me trying to say how I feel about the vague zeitgeist at the moment, but I think I was complaining mostly about proposals for governance that do seem aimed at eliminating bad actors or bad behaviour.

Governance primarily should be in place to protect the average good person from doing something ill-considered, such as irregular and inexact accounting for funding streams which could cause problems later on, or slap-dash recruitment practices which are not designed to ensure you are being fair to all applicants.

I think this sounds reasonable and I don't object to it, except insofar as I'd want to be sure it was actually pulling its weight and not guarding against hypothetical problems that don't occur or don't matter much. "We weren't paying attention and we lost some money" is common and damaging: yes you should have good accounting. Other stuff I don't know.

I like your point that it feels like "leadership" (scare quotes because I still don't really believe it exists) doesn't have as much trust in the community as vice versa. I personally think this is a matter of perception versus reality - most of the time when this has come up, "leadership" has argued that they don't actually want people to defer to them and they're not sure why people are getting that impression, etc.

I find the language of the title quite PRish, btw. If you'd titled the post 'keep EA low transparency', I suspect it would have received less support.

This is a fair criticism. Indeed, much of the language and argument is emotional. Perhaps I should have included a disclaimer "this is a post written with emotion, aiming to persuade, please adjust accordingly".

I don't want to not write in the way that I'm feeling, though. Especially since it seems to me that the tendency which I'm trying to oppose is also operating at the emotional level.
