3 suggestions about jargon in EA

Thanks for this post!

The upside of jargon is that it can efficiently convey a precise and sometimes complex idea. The downside is that jargon will be unfamiliar to most people.

Jargon has another important upside: its use is a marker of in-group belonging. So, especially IRL, employing jargon might be psychologically or socially useful for people who are not immediately perceived as belonging in EA, or feel uncertain whether they are being perceived as belonging or not.

Therefore, when first using a particular piece of jargon in a conversation, post, or whatever, it will often be valuable to provide a brief explanation of what it means, and/or a link to a good source on the topic. This helps people understand what you’re saying, introduces them to a (presumably) useful concept and perhaps body of work, and may make them feel more welcomed and less disorientated or excluded.

Because jargon is a marker of in-group belonging, I fear that an unprompted explanation could alienate someone who infers that the jargon is being explained to them because they're perceived as not belonging. (E.g., "I know what existential risk is! Would this person feel the need to explain this to me if I were white/male/younger?") In some circumstances, explaining jargon unprompted will be appreciated and inclusive, but I think it's a judgment call.

A Framework for Thinking about the EA Labor Market

I love the idea of gathering this information. But would EA orgs be able to answer the salary questions accurately? I particularly wonder about the question comparing salaries at the org to those at for-profit companies. If the org isn't paying for compensation data (as many for-profit companies do), it may not be in a good position to make that comparison. Its employees, especially those who have always worked in nonprofits, may not even know how much they could be making elsewhere. The org could perhaps cobble together a guess via Glassdoor, but the limitations of the data there would make that hard to do meaningfully, not to mention time-consuming.

For orgs willing to share, it would be better to get the granular salary data itself (ideally correlated with experience and education).

How do we check for flaws in Effective Altruism?

I think "competitors" for key EA orgs, your point #2, are key here. No matter how smart and committed you are, without competitors there is less pressure on you to correct your faults and become the best version of yourself.

Competitors for key EA orgs will also be well positioned (in some cases, perhaps in the best possible position) to engage in dialogue with the orgs they compete with, improving them and likely also the EA "public sphere."

I don't think an independent auditor that works across EA orgs and focuses mainly on logic would be as high a value-add as competitors for specific orgs. The auditor wouldn't be enough of a domain expert to competently evaluate the work of a bunch of different orgs. But I think it's worth thinking about further. I'd be curious whether you or anyone else has more ideas about the specifics of that.