Exploring different research directions to find out where in the x-risk research ecosystem I fit best. Part of the 2018-2020 cohort in FHI's Research Scholars Programme. Previously Executive Director of the Foundational Research Institute (now Center on Long-Term Risk), a project by the Effective Altruism Foundation (but I don't endorse that organization's 'suffering-focused' view on ethics).

Are there superforecasts for existential risk?

Thanks, this is quite useful. I hadn't considered the issue of incentives sufficiently before, and the OP and your comment make me put less weight on the Metaculus x-risk forecasts than I did previously.

(Though I didn't put a lot of absolute weight on them, and I can't think of any decision or downstream discussion that would be significantly affected by the update on Metaculus.)

Concern, and hope

Thanks for explaining. This all makes some sense to me, but I still favor linking on balance.

(I don't think this depends on what the post tells us about "what EAs think". Whether the author of the post is an EA accurately stating their views, or a non-EA trying to harm EA, or whatever - in any case the post seems relevant for assessing how worried we should be about the impacts of certain discussions / social dynamics / political climate on the EA community.)

I do agree that it seems bad to signal boost that post indiscriminately. E.g. I think it would be bad to share without context on Facebook. But *in a discussion on how worried we should be about certain social dynamics* I think it's sufficiently important to look at examples of these dynamics.

EDIT: I do agree that the OP could have done more to avoid any suggestion of endorsement. (I thought there was no implied endorsement anyway, but based on your stated reaction and on a closer second reading I think there is room to make this even clearer.) Or perhaps it would have been best to explicitly raise the issue of whether that post was written with the intent to cause harm, and what this might imply for how worried we should be. Still, linking in the right way seems clearly better to me than not linking at all.

Max_Daniel's Shortform

[**Mathematical definitions of heavy-tailedness.** Currently mostly notes to myself - I might turn these into a more accessible post in the future. None of this is original, and might indeed be routine for a maths undergraduate specializing in statistics.]

There are different definitions of when a probability distribution is said to have a *heavy tail*, and several closely related terms. They are *not* extensionally equivalent. I.e. there are distributions that are heavy-tailed according to some, but not all common definitions; this is for example true for the log-normal distribution.

Here I'll collect all definitions I encounter, and what I know about how they relate to each other.

I don't think the differences matter for most EA purposes, where the weakest definition that includes e.g. log-normals seems safe to use (except maybe #0 below, which might be too weak). I'm mainly collecting the definitions because I'm curious and because I think they can be an avoidable source of confusion for someone trying to understand discussions involving heavy-tailedness. (The differences might matter for more technical purposes, e.g. when deciding which statistical method to use to analyze certain data.)

There is also a less interesting way in which definitions can differ: a distribution can have a heavy right tail, a heavy left tail, or both. Some definitions thus come in three variants. I'm for now going to ignore this, stating only one variant per definition.

**List of definitions**

*X* will always denote a random variable.

0. *X* is *leptokurtic* (or *super-Gaussian*) iff its kurtosis is strictly larger than 3 (which is the kurtosis of e.g. all normal distributions), i.e. *µ_4*/*σ*^4 > 3, where *µ_4* = **E**[(*X* - **E**[*X*])^4] is the fourth central moment and *σ* is the standard deviation.

1. *X* has a *heavy right tail* iff the moment-generating function of *X* is infinite at all *t* > 0.

2. *X* is *heavy-tailed* iff it has an infinite *n*th moment for some *n*.

3. *X* is *heavy-tailed* iff it has infinite variance (i.e. infinite 2nd central moment).

4. *X* has a *long right tail* iff for all real numbers *t* the conditional probability **P**[*X* > *x* + *t* | *X* > *x*] converges to 1 as *x* goes to infinity.

4b. *X* has a *heavy right tail* iff there is a real number *x_0* such that the conditional mean exceedance (CME) **E**[*X* - *x* | *X* > *x*] is a strictly increasing function of *x* for *x* > *x_0*. (This is a definition by Bryson, 1974, who may have coined the term 'heavy-tailed' and who shows that the distributions with constant CME are precisely the exponential distributions.)

5. *X* is *subexponential* (or fulfills the *catastrophe principle*) iff for all *n* > 0 and i.i.d. random variables *X_1*, ..., *X_n* with the same distribution as *X* the quotient of probabilities **P**[*X_1* + ... + *X_n* > *x*] / **P**[max(*X_1*, ..., *X_n*) > *x*] converges to 1 as *x* goes to infinity.

6. *X* has a *regularly varying right tail* with *tail index* 0 < *α* ≤ 2 iff there is a slowly varying function *L*: (0, +∞) → (0, +∞) such that for all *x* > 0 we have **P**[*X* > *x*] = *x*^(-*α*) · *L*(*x*). (*L* is *slowly varying* iff, for all *a* > 0, the quotient *L*(*ax*)/*L*(*x*) converges to 1 as *x* goes to infinity.)
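Definition 4b lends itself to a quick numerical check. Below is a minimal stdlib-only sketch (the function names are mine, not from any of the cited sources) that estimates the conditional mean exceedance from samples: for an exponential distribution the estimates stay roughly constant (the theoretical CME is exactly 1/*λ* at every threshold), while for a log-normal distribution they increase with the threshold.

```python
import random

# Rough Monte Carlo check of definition 4b: estimate the conditional
# mean exceedance E[X - x | X > x] from samples.

def empirical_cme(samples, x):
    """Mean of (s - x) over all samples s exceeding the threshold x."""
    exceedances = [s - x for s in samples if s > x]
    return sum(exceedances) / len(exceedances)

random.seed(0)
n = 200_000
exponential = [random.expovariate(1.0) for _ in range(n)]        # CME constant (= 1)
lognormal = [random.lognormvariate(0.0, 1.0) for _ in range(n)]  # CME increasing

for x in (1.0, 2.0, 3.0):
    print(f"x = {x}: exponential CME ~ {empirical_cme(exponential, x):.2f}, "
          f"log-normal CME ~ {empirical_cme(lognormal, x):.2f}")
```

Since (per Bryson's result) constant CME characterizes the exponential distributions, the increasing log-normal estimates are one concrete sense in which its tail is "heavier than exponential".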

**Relationships between definitions**

(Note that even for the claims I state without caveats, I haven't convinced myself of a proof in detail.)

I'll use #0 to refer to the clause on the right hand side of the "iff" statement in definition 0, and so on.

(For some of these one might have to use the suitable versions of heavy right tail / left tail etc. - e.g. perhaps #1 needs to be replaced with "heavy right and left tail" or "heavy right or left tail" etc.)

- I suspect that #0 is the weakest condition, i.e. that all other definitions imply that *X* is super-Gaussian.
- I suspect that #6 is the strongest condition, i.e. implies all others.
- I think that #3 => #2 => #1 and #5 => #4 => #1 (where '=>' denotes implication).

Why I think that:

- *#0 weakest:* Heuristically, many other definitions state or imply that some higher moments don't exist, or are at least "close" to such a condition (e.g. #1). By contrast, #0 merely requires that a certain moment is larger than for the normal distribution. Also, the exponential distribution is super-Gaussian but not usually considered to be heavy-tailed - in fact, "heavy-tailed" is sometimes loosely explained to mean "having heavier tails than an exponential distribution".
- *#6 strongest:* The condition basically says that the distribution behaves like a Pareto distribution (or "power law") as we look further down the tail. And for Pareto distributions with *α* ≤ 2 it's well known and easy to see that the variance doesn't exist, i.e. #3 holds. Similarly, I've seen power laws cited as examples of distributions fulfilling the catastrophe principle, i.e. #5.
- #3 => #2 is obvious.
- #2 => #1: A statement very close to the contrapositive is well known: if the moment-generating function exists in an open neighborhood around some value, then the *n*th moments about that value are given by the *n*th derivative of the moment-generating function at that value. (I'm not sure if there can be weird cases where the moment-generating function exists at some points but on no open interval.)
- #5 => #4 and #4 => #1 are stated on Wikipedia.
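The *n* = 2 case of the catastrophe principle (#5), and its failure for light tails, can likewise be illustrated by simulation. Here is a minimal sketch with my own naming and with thresholds chosen so the Monte Carlo estimates aren't too noisy: for a Pareto distribution the ratio **P**[*X_1* + *X_2* > *x*] / **P**[max(*X_1*, *X_2*) > *x*] is already close to 1 at moderately large *x*, while for a (unit-rate) exponential distribution it is approximately (1 + *x*)/2 and keeps growing.

```python
import random

# Monte Carlo illustration of the catastrophe principle for n = 2:
# estimate P[X1 + X2 > x] / P[max(X1, X2) > x] by simulation.

def tail_ratio(sampler, x, trials=300_000):
    """Ratio of how often the sum vs. the max of two draws exceeds x."""
    sum_hits = max_hits = 0
    for _ in range(trials):
        a, b = sampler(), sampler()
        if a + b > x:
            sum_hits += 1
        if max(a, b) > x:
            max_hits += 1
    return sum_hits / max_hits

random.seed(0)
pareto = lambda: random.paretovariate(1.5)     # P[X > x] = x^(-1.5), subexponential
exponential = lambda: random.expovariate(1.0)  # light-tailed

r_pareto = tail_ratio(pareto, 20.0)
r_exp = tail_ratio(exponential, 7.0)
print("Pareto ratio at x = 20:", round(r_pareto, 2))
print("Exponential ratio at x = 7:", round(r_exp, 2))
```

Intuitively, for the Pareto distribution the sum almost only exceeds a large threshold because a single draw does ("one catastrophe"), whereas for the exponential it typically exceeds the threshold via two moderately large draws.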

Concern, and hope

I don't have strong views on this, but I'm curious why you think linking to instances of bad behavior is bad. All the reasons I can think of don't seem to apply here - e.g. the link clearly isn't an endorsement, and it's not providing resources e.g. through increased ad revenues or increasing page rank.

By contrast, I found the link to the post useful because it's evidence about community health and people's reactions: the fact that someone wrote that post updated me toward being more worried (though I think I'm still much less worried than the OP, and for somewhat different reasons). And I don't think I could have made the same update without skimming the actual post. I.e. simply reading a brief description like "someone made a post saying X in a way I think was bad" wouldn't have been as epistemically useful.

I would guess this upside applies to most readers. So I'm wondering which countervailing downsides would recommend a policy of not linking to such posts.

3 suggestions about jargon in EA

Another example might be public health messaging. E.g. I've heard anecdotal claims that it's a deliberate choice not to emphasize, say, the absolute risk of contracting HIV per instance of unprotected sex with an infected person.

3 suggestions about jargon in EA

It sounds like we're at least roughly on the same page. I certainly agree that e.g. Greaves and Mogensen *don't* seem to think that "the long-term effect of our actions is hard to predict, but this is just a more pronounced version of it being harder to predict the weather in one day than in one hour, and we can't say anything interesting about this".

As I said:

To be fair, it's motivated by *a bit* more, but arguably not much more: that bit is roughly that *some* philosophers think that the "hard to predict" observation at least suggests one can say something philosophically interesting about it, and in particular that it might pose some kind of challenge for standard accounts of reasoning under uncertainty such as expected value.

I would still guess that to the extent that these two (and other) philosophers advance more specific accounts of cluelessness - say, non-sharp credence functions - they don't take their *specific* proposal to be part of the definition of cluelessness, or to be a criterion for whether the term 'cluelessness' refers to anything at all. E.g. suppose philosopher A thought that cluelessness involves non-sharp credence functions, but then philosopher B convinces them that the same epistemic state is better described by having sharp credence functions with low resilience (i.e. likely to change a lot in response to new evidence). I'd guess that philosopher A would say "you've convinced me that cluelessness is just low credal resilience instead of having non-sharp credence functions" as opposed to "you've convinced me that I should discard the concept of cluelessness - there is no cluelessness, just low credal resilience".

(To be clear, I think in principle either of these uses of 'cluelessness' would be possible. I'm also less confident that my perception of the common use is correct than I am for terms that are older and have a larger literature attached to them, such as the examples I gave in my previous comment.)

Andreas Mogensen's "Maximal Cluelessness"

My belief that cluelessness is important is fairly independent of any specific philosophical/technical account of cluelessness. In particular, I don't think me changing my mind on whether credence functions have to be sharp would significantly change my views on the importance of cluelessness.

In this comment I've explained in more detail what I think about the relationship between the basic idea and specific philosophical theories trying to describe it.

(FWIW, I don't feel like I have a well-informed view on whether credence functions have to be sharp. If anything, I have a weak intuition that it's a bit more likely than not that I'd conclude they have to be if I spent more time looking into the question.)

3 suggestions about jargon in EA

I actually think this is a tricky case where the boundary to misuse is hard to discern. (I do agree that, in many contexts, the idea can and should "be easily and concisely expressed without jargon".)

This is because I think philosophical work on cluelessness is at its core motivated by it being "extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive". To be fair, it's motivated by *a bit* more, but arguably not much more: that bit is roughly that *some* philosophers think that the "hard to predict" observation at least suggests one can say something philosophically interesting about it, and in particular that it might pose some kind of challenge for standard accounts of reasoning under uncertainty such as expected value. But importantly, there is no consensus about what the correct "theoretical account" of cluelessness is: to name just a few, it might be non-sharp credences, it might be low credal resilience, or it might just be a case where expected-value reasoning is hard but we can't say anything interesting about it after all. Still, cluelessness is a term proponents of all these different views use.

I think this is a quite common situation in philosophy: at least some people have a 'pre-theoretic intuition' that *seems* to point to some philosophically interesting concept, but philosophers can't agree on what it is, what its properties are, or even whether the intuition refers to anything at all. Analogs might be:

- 'Free will'. Philosophers can't agree if it's about being responsive to reasons, having a mesh of desires with certain properties, being ultimately responsible for one's actions, a "could have done otherwise" ability, or something else; whether it's compatible with determinism; whether it's simply an illusion. But it would be odd to say that "using free will to simply refer to the idea that certain (non-epistemic) conditions need to be fulfilled for someone to be morally responsible for their actions - in a way in which e.g. an addict isn't" was a "misuse" of the term because free will is a technical term in philosophy by which people mean something more specific.
- 'Truth'. Philosophers can't agree if it's about correspondence to reality, coherence, or something else; whether it's foundational for meaning or the other way around; or if it's just a word the semantics of which is fully captured by sentences like "'snow is white' is true if and only if snow is white" and we can't have any interesting theory about. But it would be odd to say that garden-variety uses of "true" are misuses.
- 'Consciousness': ...
- And so on.

3 suggestions about jargon in EA

I agree with these recommendations, thanks for providing a resource one can conveniently link to. (I also thought I remembered a very similar post from a couple of years ago, but wasn't able to find it. So maybe I made that up.)

I still remember an amusing instance of what you address in #3: A few years ago, an EA colleague implied that the metaphor "carving reality at its joints" was rationalist/LessWrong terminology. But in fact it's commonly attributed to Plato, and frequently used in academic philosophy.

Thanks, this is a great contribution!

I'd like to nominate Paul Christiano's *On Progress and Prosperity*. It best fits under Cause-prioritization or Long-term future.

(As an aside, I think it would be valuable to have a similar list highlighting the best posts from Paul Christiano's *Rational Altruist* blog. They are all from 2014 or older.)