
Mediators help individuals and organizations resolve conflict. They serve as a buffer when strong emotions come up, help the parties clarify what they want (which is often more compatible than it first appears), and help them reach clear agreements.

Over the years, I've heard of conflicts within a variety of EA organizations. Professional mediation might have helped resolve many of them earlier and more smoothly.

I already have a solid background in facilitation and counseling, plus some minimal training in mediation. In addition, my next career step is open.

Accordingly, I'm currently thinking of training up to become a mediator for the EA/rationalist ecosystem. Services I'd include in that role would be:

  • Offering conflict mediation for individuals and organizations.
  • Giving workshops for EAs on how to disagree better.
  • Offering communication trainings for organizations to a) build healthy team cultures that prevent conflict in the first place, and b) transform friction into clarity and cooperation. (I'm already working on formats for this with the team behind AuthRev and the Relating Languages.)

Do you think this is a good idea? If yes, my next step would be to apply for 6-12 months of transition funding in order to do specialized skill-building and networking.

Here are the reasons for and against this that I've come up with so far:

Reasons for

  • Especially after last year, there are some boiling conflict lines within EA. And the agile startup environments of EA orgs offer plenty of potential for friction.
  • It may be valuable to have an "in-house" mediator who has an in-depth understanding of EA culture, the local conflict lines, etc.
  • As far as I know, no one else currently specializes in this.
  • While the average EA likes to have controversial intellectual debates, I perceive the community as relatively conflict-averse when things get emotional. I tend to enjoy conflict and have an easy time trusting the process. I think that's useful for filling this role.
  • Trust in EA leadership seems to be at an all-time low. While I've heard that CEA's community health team is remarkably good at not being partisan, some people might be more comfortable with having an EA mediator who is not directly involved with CEA.

Reasons against

  • It may be hard to convince those who'd benefit from mediation to actually make use of it (just as with therapy or coaching). I.e., there might not actually be a market for this.
  • Subcultural knowledge may be less important than I think. External mediators may be able to fulfill this role just fine.
  • The community health team, as well as the current coaches and therapists in EA, might already be sufficiently skilled and a sufficiently obvious point of contact in case of conflict.

Comments

What would be cheap tests to determine if this would be valuable? 

I'm not sure I love the next step of a 6-12 month transition grant. That seems like a rather expensive first step! Why not first see if you can develop an MVP mediation service in two weeks? Offer it to EA organizations and perhaps someone will bite. I suspect you would learn whether this is a good idea much faster that way.

From my perspective this seems like a project of dubious value. The AuthRev and Relating Languages links look like nonsense to me. That said, I think I'm generally more close-minded than other effective altruists, so take my opinion with a grain of salt. If you believe you're onto something, I think it's better to test your hypothesis and prove the skeptics wrong.

What would be cheap tests to determine if this would be valuable? 

Good prompt, thanks!

Mediation is a high-risk/high-reward activity, and I'd only want to work with EA orgs once I'm sure I can consistently deliver very high quality. So I've now started advertising mediation to private individuals on a pay-what-you-want basis to build the necessary skill and confidence. If this works out, I'll progress to NGOs in a couple of weeks.

The AuthRev and Relating Languages links look like nonsense to me.

I wince every time I look at their homepages; they're way too optimized for selling stuff to a mainstream audience rather than providing value to rationalish people.

But, if you think Authentic Relating and Circling are legit (which a bunch of EAs in at least Germany and the Bay do), it makes sense to take AuthRev pretty seriously. Their facilitator trainings and their 350-page authentic relating games manual make them one of the core pillars of the community. Plus, some early-days CFAR folks were involved in co-founding the company.

That impression is very valuable evidence though. Afaict, AR is way more popular among EAs younger than the grantmaker generation.

There are still a lot of young EAs that aren't into AuthRev and circling, so I think as a mediator it's important to take this into account.

I don't understand how this is relevant to what I'm writing, as I don't intend to do mediation only for people who know AR or circling. But the number of upvotes indicates that others do understand, so I'd like to understand it, too. Jeroen, would you mind elaborating?

Some people might not be a fan of AR or circling, so other methods of mediation should be considered too.

It seems unlikely to me that funding you or someone else to gain these skills would be a competitive grant application. In general, it makes sense for individuals to self-finance skills they can use broadly, and for employers to finance more narrowly useful skills. EAs sometimes finance technical alignment upskilling, but that is because they want to subsidize the entire field; there is no similar argument for supporting 'mediation' as a field.

Huh, sounds plausible. At the same time, it makes me wonder whether EA should imitate the corporate world less here. Wouldn't "Would it be high EV to have an EA insider with competence in this?" be a more relevant question than "Is this something that's already common and generally useful in the non-EA world?"

I guess the heuristic you point at is for avoiding vultures?

Perhaps another consideration against is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential. I don't think I would want a single person/worldview to have that, in order to avoid systematic mistakes/biases. To be clear, this is not intended as a personal comment; I have no context on you besides this post.

I am excited about having better community mediation though. Perhaps you coordinating a group/arrangement with external people could be a great idea.

Also I think this kind of post about personal career plans with detailed considerations is great so thanks for writing it.

Perhaps another consideration against is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential. I don't think I would want a single person/worldview to have that, in order to avoid systematic mistakes/biases.

Well, good that my values are totally in line with the correct trajectory for EA then!

No, but seriously: I have no idea how to fix this. The best response I can give is: I'd suspect that having one mediator is probably still better than having zero mediators. Let's not make the perfect the enemy of the good. Plus, it's an essential part of the role to act as a catalyst for the conflict parties rather than to steer the outcome in any particular direction. (Of course, that is an ideal that is not perfectly aligned with how humans actually work.)

Perhaps you coordinating a group/arrangement with external people could be a great idea.

So far, every single time I've done ops work without guidance and under precarious financial circumstances, it has made me miserable and led to outcomes I was less than satisfied with. I'm definitely not the right person to do this.

Plus, I have some evidence that this will probably not work within any reasonable amount of effort: one person with an insider perspective on many EA orgs' conflicts said that so far, the limiting factor for hiring an external mediator has been having one available who is sufficiently trusted. I.e., being known and trusted in the community is crucial for actually doing this. It's hard enough to build a reputation for myself, even though I'm around at conferences and in the forums a lot. Building a reputation on behalf of external mediators I work with seems like a near-impossible task.

How would you maintain your independence as a third-party neutral? The two usual approaches to mitigate the risk or at least appearance of partiality are that the disputants split the cost, or that the neutral is part of a large panel such that their livelihood isn't materially dependent on a disputant's good will.

That's an excellent question!

For organization-internal mediations, I guess that's not a problem, because everyone within the org has an interest in the process going well?

One version I could think of for grievances between orgs/community members: having an EA fund or E2Ger pay for all my gigs so I can offer them pro bono and have no financial incentive to botch the outcome.

Plus, I'll definitely want to build a non-EA source of income so that I'm not entirely financially dependent on EA.

Where do you see gaps in these ideas?

I'm assuming a somewhat looser standard than the norms for mediators generally, in light of the parties' presumed interest in an EA-associated mediator. However, in my view, the conflict standards for third-party neutrals are significantly higher than for just about any other role, and rightfully so.

I think having an E2Ger as benefactor is probably the best practicable answer to conflicts, although you would inherit all the conflicts of any major benefactor. I would probably not try to mediate any matter in which a reasonable person might question the impartiality of any major (over 10-20%?) funder of your work. Hopefully, you could find an E2Ger without many conflicts?

If you're dependent on a fund for more than 10-20%, I think that conflict would extend to all the fund managers in a position to vote on your grants, and the organizations that employ them. So taking money from a fund would probably preclude you from working on matters involving many of the major organizations. In my view, a reasonable person could question whether a mediator could be impartial toward Org X when someone from Org X had a vote on whether to renew one of your major grants [or a vote on a major grant you intended to submit].

Some of that is potentially waivable where both parties to the dispute have approximately equal power, but I do not think it would be appropriate to waive the potential appearance of influence where a significant power imbalance existed in favor of the funder.

One challenge you'll want to think about is how to demonstrate your effectiveness to your funder(s) while maintaining confidentiality of the parties (unless you obtain a waiver from them to disclose information to the funder(s)).

Could you ask people to anonymously submit things they'd like mediation on? If you get some it suggests it would be valuable (if you get none that seems a weaker signal since that's what I'd expect unless the post got seen a lot)
