[ Question ]

What are words, phrases, or topics that you think most EAs don't know about but should?

by Ozzie Gooen · 1 min read · 21st Jan 2020 · 19 comments



I think there's a lot of great literature that's relevant for EA purposes. Sometimes specific phrases can act as useful keywords.

If we use similar language as other academic fields, then:

  1. Other groups can understand Effective Altruist writing more easily.
  2. Effective Altruists can more easily search for existing literature and discussion.

I've recently been surveying different fields and finding a lot of terminology that I think is both (1) not currently used by many people here, and (2) likely to be interesting to them.

This can be as simple as an interesting Wikipedia page. I think there are tons of interesting Wikipedia pages I don't yet know to search for, but would get a lot of value out of if I did.

When submitting, if it's not obvious, I suggest adding information about why this could be interesting to other EAs.


11 Answers

A while ago, Peter McIntyre and Jesse Avshalomov compiled a list of concepts they deemed worth knowing. I imagine many are pretty well known within EA, but I'll go out on a limb and say I wouldn't be surprised if most EAs find more than one useful new concept there. https://conceptually.org/concepts


Consilience

The principle that evidence from independent, unrelated sources can "converge" on strong conclusions.

This word can arguably be used to describe the "Many Weak Arguments" side of the "Many Weak Arguments vs. One Relatively Strong Argument" post; JonahSinick pointed this out in that post.

Why this is interesting
Consilience is important for evaluating claims. There's a fair bit of historical discussion showing how useful it can be to combine evidence from many different, independent sources.

Credible Interval:

In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution.[1] The generalisation to multivariate problems is the credible region. Credible intervals are analogous to confidence intervals in frequentist statistics,[2] although they differ on a philosophical basis:[3] Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.


Credence:

Credence is a statistical term that expresses how much a person believes that a proposition is true.

Why this matters:

It seems like a lot of questions EAs are interested in involve subjective Bayesian probabilities. A lot of people misuse the frequentist term "confidence interval" for these purposes (to be fair, this isn't just a problem with EAs/rationalists; I've seen scientists make this mistake too, akin to how the p-value is commonly misunderstood). I think it's helpful to use the right statistical jargon so we can more easily engage with the statistical literature, and with statisticians.
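To make the distinction concrete, here is a minimal Python sketch of computing a Bayesian credible interval. All the numbers are made up for illustration: a coin flipped 10 times comes up heads 7 times, and we want a 90% credible interval for its bias under a uniform prior. The quantiles are approximated by sampling from the posterior rather than computed in closed form.

```python
import random

# Hypothetical data: 7 heads in 10 flips. With a uniform Beta(1, 1) prior,
# the posterior for the coin's bias p is Beta(1 + heads, 1 + tails).
random.seed(0)
heads, tails = 7, 3

# Approximate the posterior quantiles by drawing many samples and sorting.
samples = sorted(random.betavariate(1 + heads, 1 + tails) for _ in range(100_000))

low = samples[int(0.05 * len(samples))]    # 5th percentile
high = samples[int(0.95 * len(samples))]   # 95th percentile

print(f"90% credible interval for p: ({low:.2f}, {high:.2f})")
```

A statement like "p lies in (low, high) with 90% probability" is exactly what a credible interval licenses; a frequentist confidence interval, despite similar-looking bounds, does not support that reading.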

Thank you for writing this! I once failed a job interview because what I learned from the EA community as a 'confidence interval' was actually a credible interval. Pretty embarrassing.

Linch (1y): Wow, that's an awfully specific way to fail a job interview! But I'm glad you've learned something from it, at least?

Noble Cause Corruption

From Wikipedia:

Noble cause corruption is corruption caused by the adherence to a teleological ethical system, suggesting that people will use unethical or illegal means to attain desirable goals, a result which appears to benefit the greater good. Where traditional corruption is defined by personal gain, noble cause corruption forms when someone is convinced of their righteousness, and will do anything within their powers to achieve the desired result. An example of noble cause corruption is police misconduct "committed in the name of good ends" or neglect of due process through "a moral commitment to make the world a safer place to live."

Why this is interesting
I think one serious concern around consequentialist thought is that it can be used in dangerous ways. This term describes some of that danger, and the corresponding literature provides examples similar to what I'd expect from future people who misuse EA content.

Nobel Cause Corruption

Is this about how the Peace Prize is given out to either warmongers or ineffective activists rather than professional diplomats and international supply chain managers?


Governance

In political science literature, "governance" refers to how something is overseen and managed, whether or not that's done by government. For example, if your AI system has to comply with a few regulations, but you're also accountable to your company's ethics board and shareholders, that's all governance.

Relevant for

EAs in politics, policy, or institutional change. Particularly useful for EAs interested in AI policy, where a wider conception of governance is arguably much more desirable than direct government regulation alone.

Endogenous institutions

In political and economic literature, institutions include formal groups (eg the Civil Service, the Church of England, the monarchy) but also the overall "rules of the game" (eg to what extent politicians are comfortable accepting bribes/gifts/political donations in exchange for political influence). These rules affect the people "playing the game" eg lobbyists and politicians, but they're also created by them.

Relevant for

EAs working on politics, lobbying or institutional change.

Optimiser's curse / Regression to the mean

On how trying to optimise can lead you astray: picking whichever option looks best by your estimates systematically selects for overestimates, so the chosen option tends to disappoint.

Related: Goodhart's Law

"When a measure becomes a target, it ceases to be a good measure"
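The optimizer's curse can be shown in a few lines of simulation. This is an illustrative sketch with made-up numbers: 20 interventions all have the same true value, but our estimates of them are noisy, and we always act on the highest estimate.

```python
import random

# Hypothetical setup: 20 interventions, each with true value 10, estimated
# with independent Gaussian noise (sd = 3). We repeatedly pick the option
# with the highest *estimate* and record how much that estimate overshoots
# its true value.
random.seed(0)
TRUE_VALUE, NOISE_SD, N_OPTIONS, N_TRIALS = 10.0, 3.0, 20, 10_000

overestimates = []
for _ in range(N_TRIALS):
    estimates = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(N_OPTIONS)]
    overestimates.append(max(estimates) - TRUE_VALUE)

avg_bias = sum(overestimates) / len(overestimates)
print(f"Average overestimate of the chosen option: {avg_bias:.2f}")
```

Even though every estimate is unbiased on its own, the act of selecting the maximum introduces a large upward bias, which is exactly why the best-looking charity, intervention, or project usually underperforms its evaluation.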


MichaelA (1y): I second the importance of these three terms/concepts. In case anyone stumbles across this post in future, here [https://www.lesswrong.com/posts/5gQLrJr2yhPzMCcni/the-optimizer-s-curse-and-how-to-beat-it] are two [https://forum.effectivealtruism.org/posts/Wghi6hpu5gGBZHvtj/link-the-optimizer-s-curse-and-wrong-way-reductions] sources on the optimizer's curse. (I thought the first was great. I personally disagree with the second on various points, but mention it as other people seemed to find it good and there's good discussion in the comments.) And here [https://www.lesswrong.com/posts/urZzJPwHtjewdKKHc/using-expected-utility-for-good-hart] are some [https://www.lesswrong.com/posts/PADPJ3xac5ogjEGwA/defeating-goodhart-and-the-closest-unblocked-strategy] sources on [https://www.lesswrong.com/posts/YJq6R9Wgk5Atjx54D/does-bayes-beat-goodheart] Goodhart's law.

The Cooperative Principle

The cooperative principle describes how people achieve effective conversational communication in common social situations—that is, how listeners and speakers act cooperatively and mutually accept one another to be understood in a particular way.

There are 4 corresponding maxims. I think the main non-obvious ones are:

Maxim of quantity:

  1. Make your contribution as informative as is required (for the current purposes of the exchange).
  2. Do not make your contribution more informative than is required.

Maxim of relevance

  1. Be relevant to the discussion. (For instance, when responding to "What would you like for lunch?" with "I would like a sandwich", you are expected to be answering that very question, not making an unrelated statement.)

I think this video explains this well.

Why this is interesting
I've definitely been in conversations where bringing up the maxims of quantity and relevance would have been useful. Conversation and discussion can be quite difficult, and we do a lot of both.

Sometimes the term "the Gricean maxims" (or "Grice's maxims") is used instead of "the Cooperative Principle" as the principal term. I personally find it more memorable, since "the Cooperative Principle" could mean so many things.

Can you give an example of such a conversation, as well as the thought process towards bringing them up? I hear about conversational principles like these, but I don't know how to get from "vague feeling that something is wrong with the conversation" to "I think you're confusing me with excess information".

Ozzie Gooen (1y): A very simple example might be someone saying "What's up?" and the other person answering "The sky." "What's up?" assumes a shared amount of context; to be relevant, it should be read as asking how the other person is doing. There are a bunch of YouTube videos on the topic, and I recall some go into examples.

Normalization of deviance

"Social normalization of deviance means that people within the organization become so much accustomed to a deviant behavior that they don't consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety" [5]. People grow more accustomed to the deviant behavior the more it occurs [6] . To people outside of the organization, the activities seem deviant; however, people within the organization do not recognize the deviance because it is seen as a normal occurrence. In hindsight, people within the organization realize that their seemingly normal behavior was deviant.

(from Wikibooks)

I think this generalizes to cases where there is a stated norm, that norm is regularly violated, and the violation of the norm becomes the new norm.


Scrupulous people, or people otherwise committed to particular stances, may be concerned about ways in which norms are not upheld around, for example, truth-telling, donating, veganism, etc.


Deepity

The term refers to a statement that is apparently profound but actually asserts a triviality on one level and something meaningless on another. Generally, a deepity has (at least) two meanings: one that is true but trivial, and another that sounds profound but is essentially false or meaningless, and would be "earth-shattering" if true.

Why this is interesting
I mostly think this is just a great phrase for a lot of the difficult language I occasionally see used in moral discussions.

Knightian uncertainty / deep uncertainty

a lack of any quantifiable knowledge about some possible occurrence

This means any situation where uncertainty is so high that it is very hard / impossible / foolish to quantify the outcomes.

To understand this, it is useful to note the difference between uncertainty (e.g. the chance of a nuclear war this century) and risk (e.g. the chance of a coin coming up heads).

The process for making decisions under uncertainty may be very different from the process for making decisions under risk. The optimal tactic for making good decisions in situations of deep uncertainty may not be to just quantify the situation.

Why this matters

This could drastically change the causes EAs care about and the approaches they take.

This could alter how we judge the value of taking action that affects the future.

This could mean that the "rationalist"/LessWrong approach of "shut up and multiply" for making decisions might not be correct.

For example, this could shift decisions away from naive expected value based on outcomes and probabilities, and towards favouring courses of action that are robust to failure modes, have good feedback loops, have short chains of effects, etc.

(Or maybe not, I don’t know. I don’t know enough about how to make optimal decisions under deep uncertainty but I think it is a thing I would like to understand better.)

See also

The difference between "risk" and "uncertainty". "Black swan events". Etc.