That's an excellent question!
For organization-internal mediations, I guess that's not a problem, because everyone within the org has an interest in the process going well?
One version I could think of for grievances between orgs/community members: having an EA fund or E2Ger pay for all my gigs so I can offer them pro bono and have no financial incentive to botch the outcome.
Plus, I'll definitely want to build a non-EA source of income so that I'm not entirely financially dependent on EA.
Where do you see gaps in these ideas?
Perhaps another consideration against is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential. I don't think I would want a single person/worldview to have that, in order to avoid systematic mistakes/biases.
Well, good that my values are totally in line with the correct trajectory for EA then!
No, but seriously: I have no idea how to fix this. The best response I can give is: I'd suspect that having one mediator is probably still better than...
Huh, sounds plausible. At the same time, it makes me wonder whether EA should imitate the corporate world less here. Wouldn't "Would it be high EV to have an EA insider with competence in this?" be a more relevant question than "Is this something that's already common and generally useful in the non-EA world?"
I guess the heuristic you point at is for avoiding vultures?
What would be cheap tests to determine if this would be valuable?
Good prompt, thanks!
Mediation is a high-risk/high-reward activity, and I'd only want to work with EA orgs once I'm sure that I can consistently deliver very high quality. So I've now started advertising mediation to private individuals on a pay-what-you-want basis to build the necessary skill and confidence. If this works out, I'll progress to NGOs in a couple of weeks.
The AuthRev and Relating Languages links look like nonsense to me.
I wince every time I look at their homepages, way too...
Agree with everything.
Your friend sounds delightful! I think actually, what I'm trying to point towards here is closer to "lifestyle anarchism" than classic virtue ethics. Coincidentally, I found myself defaulting back to explaining my values in anarchist terms when I announced my career transition from active EA community builder to baby influencer in my first blog post.
I guess it's no coincidence that Rockwell's "On living without idols" is my all-time favorite on the EA Forum.
Thanks! I'm still grappling with putting the intuitions behind this post into words, so this is valuable feedback.
Personally, my heuristic in the example you describe is rolling with what I feel like. Considerations that go into that are:
1. Will it kill me? (I'm allergic to red meat)
2. Would I be actively disgusted eating it? (The case for most if not all non-vegetarian stuff.)
3. Do I lack the spoons to have a debate about this, given the amount of pushback/disappointment I expect from the host?
...and when all of them get a "no":
4. Do I feel like my nutr...
Strongly agree!
Actually, the seeds for a bunch of my current knowledge about and approach to community building were sown during various unconferences over the years.
The 2020 Unconference was my first in-person encounter with EA. Since my first contact with EA had been reading a bunch of 80k articles that didn't quite seem to have me as part of their target audience, I was very positively surprised by how warm, caring, and non-elitist the community was.
I learned to get these things out of EAG(x)s as well. But, had the fancy professional events been my ...
Yep - it reflects how many things in EA already work implicitly. That's one of the things I love about EA. And I think it would be good if we used this as an explicit model more often, too.
If you want to dive a little bit deeper into these kinds of management practices, you may want to have a look into the Reinventing Organizations-wiki: https://reinventingorganizationswiki.com/en/theory/decision-making/
If you want to dive very, very deep, Frederic Laloux's "Reinventing Organizations" might be a worthwhile read. I'm halfway through, and it has helped me build a whole bunch of intuitions for how to do community building better.
My personal gold standard of good organizing is the Advice Process. Description by Burning Nest:
"The general principle is that anyone should be able to make any decision regarding Burning Nest.
Before a decision is made, you must ask advice from those who will be impacted by that decision, and those who are experts on that subject.
Assuming that you follow this process, and honestly try to listen to the advice of others, that advice is yours to evaluate and the decision yours to make."[1]
One of the problems the Advice Process tackles is what anarchist vision...
Yep, expectation-setting like that is super valuable.
I've also written a short facilitation handbook a couple months ago. It's useful for meetups, workshops, and basically any other kind of work with groups. Optimizing for psychological safety is implicit in a bunch of things there.
Thanks! Yep, "socials are all people want" is a bit of a hyperbole. In addition to the TEAMWORK talks, we also have the Fake Meat - Real Talk reading/discussion group dinners, and we'll have a talk at the next monthly social, too.
The one-day career workshops sound great, added to the to-do list.
Helps in some situations, yea.
At the same time, in EA, having access to spare cash and potential for impact are not necessarily highly correlated. So, if this becomes the only solution, it might make a bunch of extremely high EV conversations just not happen.
Thanks! Yep, that is totally in line with the fact that the Karma score of the post here is much more mixed than on LessWrong, which definitely is an Askier sphere than EA.
Strong upvote!
I'm constantly putting some effort into automating information flows.
E.g., I asked an EA Berlin community member to write a how-to on finding housing in Berlin, because I get that question at least once a month.
If you have more ideas for how to automate such things, I'd be excited to read about them.
No hero worship at all intended, sorry if it came off like that. I agree with you that way too much of that happens in EA. Rockwell's "On living without idols" is with quite some distance my favorite piece on the EA Forum, and one of my favorite texts on all of the internet.
I'm one of the ~1% of EAs who have a natural tendency to ask for favors too freely rather than too cautiously, so I would have appreciated knowing these things earlier. The core target audience of this post is people like me.
However, I do think the things I write here might be useful...
Thanks! Yep, I'm definitely an outlier in EA regarding how much I don't care about authority.
I added section 7 a couple of hours after publication to account for feedback on the LessWrong side of this post. I've now also added a disclaimer at the start:
"Note: The intended message of this post is not "Don't reach out to busy people!", but "Do reach out, and have these things in mind to make it more likely to get a response/if you don't get one." "
Since writing this, I've done a bunch more debating and thinking about how to handle romantic attraction in communities I'm actively involved in responsibly. So, here's the rule I want to commit to from now on:
In any community I'm involved in, I won't be the one driving romantic escalation (or hinting at it) with anyone lower in the institutional hierarchy than me. This applies for 1 month after low-intensity interactions like a 90-minute workshop, and for 3 months after high-intensity interactions like a retreat where I was in a lead facilitator role.
Some speci...
I think a more steelmanned version of my initial claim would be that there's a particular type of struggling that corresponds to low-integrity behavior, and that some aspects of current EA culture make it more likely for people to struggle in that particular way. Even (and maybe especially) if they are generally caring and well-meaning and honestly dedicated to the cause.
I think "scarcity mindset" is an okay handle.
A postrationalist friend also pointed out that what I'm talking about corresponds to Buddhism's realm of hungry ghosts. In modern psychological...
Yup, I definitely overgeneralized here and may be completely off. I think there's something I'm pointing at, and this helps me clarify my thinking. So thanks.
Generally: I by no means want to demonize anyone for struggling. To a significant extent, I buy into a social model of mental health, and mostly see one person's struggling as a symptom of their whole surroundings (social and otherwise) being diseased.
My intention behind this post was to point out some ways in which I think EA is suboptimally organized. The rough claim I was aiming for is this: "It's easier to be a saint in paradise, so let's make EA a bit more paradisiacal by fixing some of our norms."
Yep, I agree with that point - being untrustworthy and being under-resourced are definitely not the same thing.
I partially agree.
I love that definition of elites, and can definitely see how it corresponds to how money, power, and intellectual leadership in EA revolve around the ancient core orgs like CEA, OpenPhil, and 80k.
However, the sections of Doing EA Better that called for more accountability structures in EA left me a bit frightened. The current ways don't seem ideal, but I think there are innumerable ways in which formalizing power can make institutions more rather than less molochian, and only a few that actually significantly improve the way things ar...
"How does your last point fit in there though?"
On second thought, I covered everything that's immediately relevant to this topic in section 2.2, which I quickly expanded from the Facebook post this is based on. So yea, 3. should probably be a separate EA Forum post entirely. Sorry for my messy reasoning here.
I'll add more object-level discussion of 3. under Kaj Sotala's comment.
Thanks for writing this up. I agree with most of these points. However, not with the last one:
I think we should see "EA community building" as less valuable than before, if only because one of its biggest apparent success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well.
If anything, I think the dangers and pitfalls of optimization you mention warrant different community building, not less. Specifically, I see two potential dangers to pulling resources out of community building:
"We established a policy that established members, especially members of the executive, were to refrain from hitting on or sleeping with people in their first year at the society."
This sounds super reasonable for EA, too. How would you enforce/communicate this?
In my club this was done informally, by just telling people the rule, and telling people to knock it off if we saw them violate it, which was sufficient for us.
EA is larger, so you'd have to think harder about enforcement/communication, and the various edge cases. It would certainly depend on the different contexts of different places. The goal of such a policy would be to:
You have ...
Full disclosure, because without it, this post would be a bit phony: I haven't always followed this policy within EA or outside, and took just one or two weeks from first thinking it might be good to implement it in EA to writing this post.
In general, if I write about community dynamics, assume that I think about them this thoroughly not because I'm extraordinarily virtuous and clear-sighted in regards to people stuff, but because I'm sometimes socially a bit clumsy and all these models and methods help me function at a level that just comes naturally to oth...
Yep, I'm with Xavier here. The rule gives community builders a bit of an incentive not to make EA their only social bubble (which I think is inherently good). And while it is not without workarounds, all of them cushion the addressed problem.
For example, it encourages local community builders to hand over event facilitation to others more often. And if the rule is publicly known, participants can take a break from events that one leader leads to get around the rule. If participants don't know the rule, they'd get informed about its existence when they hit on an organizer. In either case, the consequence of even intentionally working around the rule would be taking it slow.
Yup, "Don't hit on people who don't hit on me first" is a weaker rule I had already decided to adhere to in EA before I started thinking about the one outlined in this post. Independent of power, it just seems utterly necessary for managing the gender imbalance.
Yep, the problem this particular rule tries to fix is that of perceived power imbalance and all the troubles that come with it.
It is an imperfect proxy for sure, but non-proxy rules like "No dating if there is a perceived power imbalance." are very, very prone to tempt people into motivated reasoning. It can get very hard for humans to evaluate their power imbalance with Alice when oh damn are these freckles cute. False beliefs, from the inside, feel not like beliefs, but like the truth. Because of that, I wouldn't trust anyone with power who would trust t...
I know that the rule is non-negotiable for people who facilitate retreats under the AuthRev brand.
AuthRev is rather influential in the (especially North American) AR scene, so I wouldn't be surprised if the rule seeped out further from there. I'm not well-networked enough there to know the details. And even if I were, I don't think I'd want to share the saucy stories that led to people adjusting the timelines upward and downward until they found their current form.
Thanks a lot! Yep, a question I always ask myself in EA's diversity discussions is "Which kind of diversity are we talking about?"
A LessWrong post on the topic you might like if you didn't read it yet is Kaj Sotala's "You can never be universally inclusive".
Don't ask what EA can do for you, ask what you can do for EA.
An obvious-in-hindsight statement I recently heard from a friend:
"If I believed that my being around was net negative for EA, I'd leave the community."
While this makes complete sense in theory, it is emotionally difficult to commit to if most of your friends are in EA. This makes it hard for us to evaluate our impact on the community properly. Motivated reasoning is a thing.
So, it may be worthwhile for us to occasionally reflect on the following questions:
A fancy version might be some form of integration between the EA Forum and e.g. Kialo, where forum accounts can be used to partake in the discussion trees, and forum posts can be used as discussion contributions.
This shifted my opinion towards being agnostic/mildly positive about this public statement.
I'm still concerned that some potential versions of EA getting more explicitly political might be detrimental to our discourse norms, for the reasons Duncan, Chris, Liv, and I outlined in our comments. But yea, this amount of public support may well nudge grantmakers/donors to invest more into community health. If so, I'm definitely in favor of that.
Okay, these strong down- and disagreement-votes are genuinely mysterious to me now.
The only interpretation that comes to mind is that somebody expects that something bad could come from this offer. I can't imagine anything bad coming from it, so I'd appreciate feedback. Either here, where I can react, or in my Admonymous is fine.
Thanks, that's encouraging feedback!
Anywhere else you think I should advertise this? I already got the first booking. But given the mixed voting score, I don't expect this post to still be read by anyone 2-3 days from now.
Edit: I no longer agree with the content of this comment. Jason convinced me that this pledge is worth more than just applause lights. In addition, I no longer think this is an appropriate place for a slippery-slope argument.
_____________
I'd like to explain why I won't sign this document, because a voice like mine seems to still be missing from the debate: Someone who is worried about this pledge while at the same time having been thoroughly involved in leftist discourse for several years pre-EA.
So here you go for my TED talk.
I'm not a Sam in ...
I've attended an online LessWrong Community Weekend co-organized by Linda and can vouch for her ability to organize unconferences way beyond the level of what I thought possible.
Do you have a link to a smooth definition of "ideational capital"? I googled your citation and found a book, but apparently my skill in deciphering political science essays has massively declined since university.
A meta-level remark: I notice I'm a bit emotionally attached to "memetic capital", because I've thought about these things under the term "memetics" a bunch during the last year. In addition, a person whose understanding of cultural evolution I admire tends to speak about it in terms of memetics, so there are some matters of tribal belonging at play ...
I don't understand how this is relevant to what I'm writing, as I don't intend to do mediation only for people who know AR or circling. But the number of upvotes indicates that others do understand, so I'd like to understand it, too. Jeroen, would you mind elaborating?