
Mediators help individuals and organizations resolve conflict. They serve as a buffer when strong emotions come up, help the conflicting parties clarify their wants (which are often more compatible than they seem at first sight), and help make clear agreements.

Over the years, I've heard of conflicts within a variety of EA organizations. Professional mediation might have helped resolve many of them earlier and more smoothly.

I already bring a solid background in facilitation and counseling, along with minimal training in mediation. In addition, my next career step is open.

Accordingly, I'm currently thinking of training up to become a mediator for the EA/rationalist ecosystem. The services I'd offer in that role would include:

  • Offering conflict mediation for individuals and organizations.
  • Giving workshops for EAs on how to disagree better.
  • Offering communication trainings for organizations to a) build healthy team cultures that prevent conflict in the first place, and b) transform friction into clarity and cooperation. (I'm already working on formats for this with the team behind AuthRev and the Relating Languages.)

Do you think this is a good idea? If yes, my next step would be to apply for 6-12 months of transition funding in order to do specialized skill-building and networking.

Here are the reasons for and against this that I've come up with so far:

Reasons for

  • Especially after last year, there are some boiling conflict lines within EA. And the agile startup environments of EA orgs offer plenty of potential for friction.
  • It may be valuable to have an "in-house" mediator who has an in-depth understanding of EA culture, the local conflict lines, etc.
  • As far as I know, no one else currently specializes in this.
  • While the average EA likes to have controversial intellectual debates, I perceive the community as relatively conflict-averse when things get emotional. I tend to enjoy conflict and have an easy time trusting the process. I think that's useful for filling this role.
  • Trust in EA leadership seems to be at an all-time low. While I've heard that CEA's community health team is remarkably good at not being partisan, some people might be more comfortable with having an EA mediator who is not directly involved with CEA.

Reasons against

  • It may be hard to convince those who'd benefit from mediation to actually make use of it (just as with therapy or coaching). I.e., there might not actually be a market for this.
  • Subcultural knowledge may be less important than I think. External mediators may be able to fulfill this role just fine.
  • The community health team, as well as the current coaches and therapists in EA, might already be sufficiently skilled and a sufficiently obvious point of contact in case of conflict.

Comments

What would be cheap tests to determine if this would be valuable? 

I'm not sure I love the next step of a 6-12 month transition grant. That seems like a rather expensive first step! Why not first see if you can develop an MVP mediation service in two weeks? Offer it to EA organizations and perhaps someone will bite. I suspect you would learn whether this is a good idea much faster that way.

From my perspective this seems like a project of dubious value. The AuthRev and Relating Languages links look like nonsense to me. That said, I think I'm generally more close-minded than other effective altruists, so take my opinion with a grain of salt. If you believe you're onto something, I think it's better to test your hypothesis and prove the skeptics wrong.

What would be cheap tests to determine if this would be valuable? 

Good prompt, thanks!

Mediation is a high-risk/high-reward activity, and I'd only want to work with EA orgs once I'm sure that I can consistently deliver very high quality. So I've now started advertising mediation to private individuals on a pay-what-you-want basis to build the necessary skill and confidence. If this works out, I'll progress to NGOs in a couple of weeks.

The AuthRev and Relating Languages links look like nonsense to me.

I wince every time I look at their homepages; they're way too optimized for selling stuff to a mainstream audience rather than providing value to rationalish people.

But, if you think Authentic Relating and Circling are legit (which a bunch of EAs in at least Germany and the Bay do), it makes sense to take AuthRev pretty seriously. Their facilitator trainings and their 350-page authentic relating games manual make them one of the core pillars of the community. Plus, some early-days CFAR folks were involved in co-founding the company.

That impression is very valuable evidence though. Afaict, AR is way more popular among EAs younger than the grantmaker generation.

There are still a lot of young EAs that aren't into AuthRev and circling, so I think as a mediator it's important to take this into account.

I don't understand how this is relevant to what I'm writing, as I don't intend to do mediation only for people who know AR or circling. But the number of upvotes indicates that others do understand, so I'd like to understand it, too. Jeroen, would you mind elaborating?

Some people might not be a fan of AR or circling, so other methods of mediation should be considered too.

It seems unlikely to me that funding you or someone else to gain these skills would be a competitive grant application. In general, it makes sense for individuals to self-finance skills that they can use broadly, and for employers to finance more narrowly useful skills. EAs sometimes finance technical alignment upskilling, but that is because they want to subsidize the entire field; there is no similar argument for supporting 'mediation' as a field.

Huh, sounds plausible. At the same time, it makes me wonder whether EA should imitate the corporate world less here. Wouldn't "Would it be high EV to have an EA insider with competence in this?" be a more relevant question than "Is this something that's already common and generally useful in the non-EA world?"

I guess the heuristic you point at is for avoiding vultures?

Perhaps another consideration against is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential. I don't think I would want a single person/worldview to have that, in order to avoid systematic mistakes/biases. To be clear, this is not intended as a personal comment - I have no context on you besides this post.

I am excited about having better community mediation though. Perhaps you coordinating a group/arrangement with external people could be a great idea.

Also I think this kind of post about personal career plans with detailed considerations is great so thanks for writing it.

Perhaps another consideration against is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential. I don't think I would want a single person/worldview to have that, in order to avoid systematic mistakes/biases.

Well, good that my values are totally in line with the correct trajectory for EA then!

No, but seriously: I have no idea how to fix this. The best response I can give is: I'd suspect that having one mediator is probably still better than having zero mediators. Let's not make the perfect the enemy of the good. Plus, it's an essential part of the role to just be a catalyst for the conflicting parties rather than trying to steer the outcome in any particular direction. (Of course, that is an ideal that is not perfectly aligned with how humans actually work.)

Perhaps you coordinating a group/arrangement with external people could be a great idea.

So far, every single time I've done ops work without guidance and under precarious financial circumstances, it has made me miserable and led to outcomes I was less than satisfied with. I'm definitely not the right person to do this.

Plus, I have some evidence that this probably won't work with any reasonable amount of effort: one person with an insider perspective on many EA orgs' conflicts said that so far, the limiting factor for hiring an external mediator has been finding one who is sufficiently trusted. I.e., being known and trusted in the community is crucial for actually doing this. It's hard enough to build a reputation for myself, even though I'm around at conferences and in the forums a lot. Building a reputation on behalf of external mediators I work with seems like a near-impossible task.

How would you maintain your independence as a third-party neutral? The two usual approaches to mitigate the risk or at least appearance of partiality are that the disputants split the cost, or that the neutral is part of a large panel such that their livelihood isn't materially dependent on a disputant's good will.

That's an excellent question!

For organization-internal mediations, I guess that's not a problem, because everyone within the org has an interest in the process going well?

One option I can think of for grievances between orgs/community members: having an EA fund or E2Ger pay for all my gigs so that I can offer them pro bono and have no financial incentive to botch the outcome.

Plus, I'll definitely want to build a non-EA source of income so that I'm not entirely financially dependent on EA.

Where do you see gaps in these ideas?

I'm assuming a somewhat looser standard than the norms for mediators generally, in light of the parties' presumed interest in an EA-associated mediator. However, in my view, the conflict standards for third-party neutrals are significantly higher than for just about any other type of role, and rightfully so.

I think having an E2Ger as benefactor is probably the best practicable answer to conflicts, although you would inherit all the conflicts of any major benefactor. I would probably not try to mediate any matter in which a reasonable person might question the impartiality of any major (over 10-20%?) funder of your work. Hopefully, you could find an E2Ger without many conflicts?

If you're dependent on a fund for more than 10-20%, I think that conflict would extend to all the fund managers in a position to vote on your grants, and the organizations that employ them. So taking money from a fund would probably preclude you from working on matters involving many of the major organizations. In my view, a reasonable person could question whether a mediator could be impartial toward Org X when someone from Org X had a vote on whether to renew one of your major grants [or a vote on a major grant you intended to submit].

Some of that is potentially waivable where both parties to the dispute have approximately equal power, but I do not think it would be appropriate to waive the potential appearance of influence where a significant power imbalance existed in favor of the funder.

One challenge you'll want to think about is how to demonstrate your effectiveness to your funder(s) while maintaining confidentiality of the parties (unless you obtain a waiver from them to disclose information to the funder(s)).

Could you ask people to anonymously submit things they'd like mediation on? If you get some, that suggests it would be valuable. (If you get none, that seems like a weaker signal, since that's what I'd expect unless the post got seen a lot.)
