Epistemic status: yes. All about epistemics
Introduction
In principle, all that motivates the existence of the EA community is collaboration around a common goal. Just as the shared goal of preserving the environment characterizes the environmentalist community, say, EA is supposed to be characterized by the shared goal of doing the most good.
But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.
My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus. When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record.
This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:
- Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.
- Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.
And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy”[1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting philanthropic strategy accordingly.
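To make the patient-philanthropy example a bit more concrete, here is a minimal sketch of the kind of formula the template produces, using the standard Ramsey discounting identity (my illustration, not the formula from the cited papers):

$$ r = \delta + \eta g $$

Here r is the rate at which a funder discounts future spending, δ is the rate of pure time preference, η is the elasticity of marginal utility, and g is the relevant growth rate. The template says: take η and g to be whatever typical funders implicitly use, but set δ = 0. The resulting lower discount rate is what, in standard consumption-savings models, pushes the recommended saving rate up.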
But treating EA thought as generic may not be a good first approximation. Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is.
The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question. (I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.) If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution of beliefs among people not involved in the EA community.
This is hardly a mind-blowing point, and I’m sure I’m not the first to explore it.[1] But hopefully I can say something useful about how far it goes and how to distinguish it from more suspicious arguments for ingroup deference.
Disagreement in the abstract
As stated above, the intuition we’re exploring—and ultimately rejecting—is that EAs shouldn’t defer to each other’s judgment any more than to anyone else’s on questions lacking consensus. To shed light on where this intuition may have come from, and on where it can go wrong, let’s start by reviewing some of the theory surrounding disagreement in the abstract.
One may start out with a probability distribution over some state space, learn something, and then change one’s probability distribution in light of what was learned. The first distribution is then one’s prior and the second is one’s posterior. A formula for going from a prior to a posterior in light of some new information is an updating rule. Bayes’ Rule is the most famous.
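For reference, Bayes’ Rule says that the posterior probability of a hypothesis H after observing evidence E is

$$ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} $$

that is, the prior probability of H reweighted by how strongly H predicted the evidence.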
Someone’s uninformed prior is their ultimate prior over all possible states of the world: the thing they’re born with, before they get any information at all, and then open their eyes and begin updating from. Two people with different uninformed priors can receive the same information over the course of their lives, both always update their beliefs using the same rule (e.g. Bayes’), and yet have arbitrarily different beliefs at the end about anything they haven’t learned for certain.
Subjective Bayesianism is the view that what it means to be [epistemically] “rational” is simply to update from priors to posteriors using Bayes’ Rule. No uninformed prior is more or less rational than any other (perhaps subject to some mild restrictions). Objective Bayesianism adds the requirement that there’s only one uninformed prior it is rational to have. That is, it’s the view that rationality consists of having the rational uninformed prior at bottom and updating from priors to posteriors using Bayes’ Rule.
For simplicity, through the rest of this post, the term “prior” will always refer to an uninformed prior. We’ll never need to refer to any intermediate sort of prior. People will be thought of as coming to their current beliefs by starting with an [uninformed] prior and then updating, once, on everything they’ve ever learned.
Common knowledge is defined here; roughly, something is common knowledge between two people if both know it, both know that both know it, and so on without end. Two people have a common prior if they have common knowledge that they have the same prior. So: the condition that two people have common knowledge that they are rational in the Objective Bayesian sense is essentially equivalent to the condition that they have (a) common knowledge that they are rational in the Subjective Bayesian sense and (b) a common prior. Common knowledge may seem like an unrealistically strong assumption for any context, but I believe everything I will say will hold approximately on replacing common knowledge with common p-belief, as defined by Monderer and Samet (1989).
For simplicity, throughout the rest of this post, the term “rationality” will always refer to epistemic rationality in the Subjective Bayesian sense. This is not to take a stand for Subjective Bayesianism; indeed, as you’ll see, this post is written from something of an Objective position. But it will let us straightforwardly refer to assumption (a), common knowledge that everyone updates using Bayes’ Rule, as CKR (“common knowledge of rationality”), and to assumption (b) as CP (“common priors”).
Finally, people will be said to “disagree” about an event if their probabilities for it differ and they have common knowledge of whose is higher. Note that the common knowledge requirement makes this definition of “disagree” stronger than the standard-usage definition: you might disagree with, say, Trump about something in the standard-usage sense, but not in the sense used here, assuming he doesn’t know what you think about it at all.
As it turns out, if a pair of people have CP and CKR, then there is no event about which they disagree. This is Aumann’s (1976) famous “agreement theorem”. It’s often summarized as the claim that “rational people cannot agree to disagree”, though this phrasing can make it seem stronger than it is. Still, it’s a powerful result.
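To see the mechanism at work, here is a toy simulation (my own; the state space, partitions, and event are made up for illustration) of the announcement process studied by Geanakoplos and Polemarchakis (1982), mentioned again in footnote 5: two agents with a common prior but different information repeatedly announce their posteriors for an event and update on each other’s announcements, and their announcements converge, even though neither ever learns the other’s private information outright.

```python
from fractions import Fraction

# Toy illustration of back-and-forth posterior announcements leading to
# agreement (Geanakoplos & Polemarchakis 1982). The state space, partitions,
# and event below are invented for this example.

STATES = range(1, 10)                              # nine equally likely states
PRIOR = {s: Fraction(1, 9) for s in STATES}
EVENT = {3, 4}                                     # the event of interest
TRUE_STATE = 3

# What each agent can distinguish before any announcements:
PARTITION_1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
PARTITION_2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]


def posterior(info):
    """Probability of EVENT conditional on knowing the state is in `info`."""
    return sum(PRIOR[s] for s in info & EVENT) / sum(PRIOR[s] for s in info)


def cell(partition, state):
    return next(c for c in partition if state in c)


# info_1[s], info_2[s]: the states each agent considers possible when the
# true state is s (tracked at every state, so we know what an announcement
# reveals to the listener).
info_1 = {s: cell(PARTITION_1, s) for s in STATES}
info_2 = {s: cell(PARTITION_2, s) for s in STATES}


def announce(speaker_info, listener_info):
    """Speaker announces their posterior; listener updates at every state."""
    post = {s: posterior(speaker_info[s]) for s in STATES}
    for s in STATES:
        # Hearing the value post[s] reveals that the true state lies in the
        # set of states at which the speaker would have announced that value.
        revealed = {t for t in STATES if post[t] == post[s]}
        listener_info[s] = listener_info[s] & revealed
    return post[TRUE_STATE]


p1 = posterior(info_1[TRUE_STATE])
p2 = posterior(info_2[TRUE_STATE])
print("initial:", p1, p2)                          # 1/3 vs 1/2
while p1 != p2:
    p1 = announce(info_1, info_2)                  # agent 1 speaks, 2 updates
    p2 = announce(info_2, info_1)                  # agent 2 speaks, 1 updates
    print("round:", p1, p2)
# The loop terminates with both announcing the same posterior (here 1/3).
```

In this run the announcements converge to 1/3 after two rounds, without either agent ever handing over their raw private information.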
Two people may satisfy CP and CKR, and have different beliefs about some event, if the direction of the difference isn’t common knowledge between them. The difference will simply be due to a difference in information. The mechanism that would tend to eliminate the disagreement—one person updating in the other’s direction—breaks when at least one of the parties doesn’t know which direction to update in.
For example, suppose Jack and Jill have a common prior over the next day’s weather, and common knowledge of the fact that they’re both perfectly good at updating on weather forecasts. Then suppose Jack checks the forecast on his phone. They both know that, unless the posterior probability of rain the next day exactly equals their prior, the probability Jack assigns to rain now differs from Jill’s. But Jill can’t update in Jack’s direction, because she doesn’t know whether to shift her credence up or down.
Likewise, suppose we observe a belief-difference (whose direction isn’t common knowledge, of course) between people satisfying CP and CKR, we trust ourselves to be rational too, and we share these two people’s prior. Then we should simply update from our prior in light of whatever we know, including whatever we might know about what each of the others knows. If we see that Jack is reaching for an umbrella, and we see that Jill isn’t because she hasn’t seen Jack, we should update toward rain. Likewise, if we see that a GiveWell researcher assigns a high probability to the event that the Malaria Consortium is the charity whose work most cheaply increases near-term human wellbeing, and we see that some stranger assigns a low probability to any particular charity (including MC), we should update toward MC. There’s no deep mystery about what to do, and we don’t feel troubled when we find ourselves agreeing with one person more than the other.
But we often observe belief-differences between people not satisfying CP and CKR. By the agreement theorem, all disagreements involve departures from CP or CKR: all debates, for instance. And people with different beliefs may lack CP or CKR even when the direction of their disagreement isn’t common knowledge. We may see Jack’s probability of rain differ from Jill’s both because he’s checked the forecast, which predicts rain, and because he’s just more pessimistic about the weather on priors. We may see GiveWell differ from Lant Pritchett both because they’ve done some charity-research Lant doesn’t yet know about and because they’re more pessimistic about economic-growth-focused work than Lant is.
In these cases, what to do is more of a puzzle. If we are to form precise beliefs that are in any sense Bayesian, we ultimately have to make judgments about whose prior (or which other prior) seems more sensible, and about who seems more rational (or, if we think they’re both making mistakes in the same direction, what would be a more rational direction). But the usual recommendation for how to make a judgment about something—just start with our prior, learn what we can (including from the information embedded in others’ disagreements), and update by Bayes’ Rule—now feels unsatisfying. If we trust this judgment to our own priors and our own information processing, but the very problem we’re adjudicating is that people can differ in their priors and/or their abilities to rationally process information, why should we especially trust our own? [2][3]
Our response to some observed possible difference in priors or information-processing abilities, as opposed to differences in information, might be called “epistemically modest” to the extent that it involves giving equal weight to others’ judgments. I won’t try to define epistemic modesty more precisely here, since how exactly to formalize and act on our intuitions in its favor is, to my understanding, basically an unsolved challenge.[4] It’s not as simple as, say, just splitting the difference among people; everyone else presumably thinks they’re already doing this to the appropriate degree. But I think it’s hard to deny that at least some sort of epistemic modesty, at least sometimes, must be on the right track.
In sum: when we see people’s beliefs differ, deciding what to believe poses theoretical challenges to the extent that we can attribute the belief-difference to a lack of CP or CKR. And the challenges it poses concern how exactly to act on our intuitions for epistemic modesty.
Two mistaken responses to disagreement
This framing helps us spot what are, I think, the two main mistakes made in the face of disagreement.
The first mistake is to attribute the disagreement to an information-difference, implicitly or explicitly, and proceed accordingly.
In the abstract, it’s clear what the problem is here. Disagreements cannot just be due to information-differences.
To reiterate: when belief-differences in some domain are due entirely to differences in information, we just need to get clear on what information we have and what it implies. But a disagreement must be due, at least in part, to (possible) differences in the disagreers’ priors or information-processing abilities. Given such differences, if we’re going to trust to our own (or our friends’) prior and rationality, giving no intrinsic weight to others’ judgments, we need some compelling—perhaps impossible—story about how we’re avoiding epistemic hubris.
Though this might be easy enough to accept in the abstract, it often seems to be forgotten in practice. For example, Eliezer Yudkowsky and Mark Zuckerberg disagree on the probability that AI will cause an existential catastrophe. When this is pointed out, people sometimes respond that they can unproblematically trust Yudkowsky, because Zuckerberg hasn’t engaged nearly as much with the arguments for AI risk. But nothing about the agreement theorem requires that either party learn everything the other knows. Indeed, this is what makes it an interesting result! Under CP and CKR, Zuckerberg would have given higher credence to AI risk purely on observing Yudkowsky’s higher credence, and/or Yudkowsky would have given lower credence to AI risk purely on observing Zuckerberg’s lower credence, until they agreed.[5] The testimony of someone who has been thinking about the problem for decades, like Yudkowsky, is evidence for AI risk—but the fact that Zuckerberg still disbelieves, despite Yudkowsky’s testimony, is evidence against; and the greater we consider Yudkowsky’s expertise, the stronger both pieces of evidence are. Simply assuming that the more knowledgeable party is closer to right, and discarding the evidence given by the other party’s skepticism, is an easy path to an echo chamber.
This is perhaps easier to see when we consider a case where we give little credence to the better-informed side. Sikh scholars (say) presumably tend to be most familiar with the arguments for and against Sikhism, but they shouldn’t dismiss the rest of the world for failing to engage with the arguments. Instead they should learn something from the fact that most people considered Sikhism so implausible as not to engage at all. I make this point more long-windedly here.
Likewise, but more subtly, people sometimes argue that we can inspect how other individuals and communities tend to develop their beliefs, and that when we do, we find that practices in the EA community are exceptionally conducive to curating and aggregating information.
It’s true that some tools, like calibration training, prediction markets, and meta-analysis, do seem to be used more widely within the EA community than elsewhere. But again, this is not enough to explain disagreement. Unless we also explicitly posit some possible irrationality or prior-difference, we’re left wondering why the non-EAs don’t look around and defer to the people using, say, prediction markets. And it’s certainly too quick to infer irrationality from the fact that a given group isn’t using some epistemic tool. Another explanation would be that the tool has costs, and that on at least some sorts of questions, those costs are put to better use in other ways. Indeed, the corporate track record suggests that prediction markets can be boondoggles.
Many communities argue for deference to their own internal consensuses on the basis of their use of different tools. Consider academia’s “only we use peer review”, for instance, or conservatives’ “only we use the wisdom baked into tradition and common sense”. Determining whose SOPs are actually more reliable seems hard, and anyway the reliability presumably depends on the type of question. In short, given disagreement, attempts to attribute belief-differences entirely to one party’s superior knowledge or methodology must ultimately, to some extent, be disguised cases of something along the lines of “They think they’re the rational ones, and so do we, but dammit, we’re right.”
The second mistake is to attribute the disagreement to a (possible) difference in priors or rationality and proceed accordingly.
Again, in the abstract, it’s clear what the problem is here. To the extent that a belief-difference is due to a possible difference in priors or rationality—i.e. a lack of CP or CKR—no one knows how to “proceed accordingly”. We want to avoid proceeding in a way that feels epistemically immodest, but, at least as of this writing, it’s unclear how to operationalize this.[6]
But again, this seems to be a hard lesson to internalize. The most common approach to responding to non-information-driven disagreement in a way that feels epistemically modest—the “template” outlined in the introduction, which I’ve used plenty—is really, on reflection, no solution at all. It’s an attempt to act as if there were no problem.[7] Look at the wording again: “assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole”, and then “act if you believe what others believe”. Given disagreement, what does “on the whole” mean? What on earth do “others believe”? Who even are the “others”? Do infants count, or do they have to have reached some age of maturity? Deferring to “others on the whole” is just another call for some way of aggregating judgments: something the disagreers all already feel they’ve done. The language sounds modest because it implicitly suggests that there’s some sort of monolithic, non-EA supermajority opinion on most issues, and that we can just round this off to “consensus” and unproblematically defer to it. But there isn’t; and even if there were, we couldn’t. Disagreement between a minority and a majority is still disagreement, and deferring to the majority is still taking a stand.
Take even the case for deferring to the probabilities of important events implied by market prices. The probability of an event suggested by the market price of some relevant asset—if such a probability can be pinned down at all—should be expected to incorporate, in some way, all the traders’ information. This mass of implicit information is valuable to anyone, but there’s no clear reason why anyone should also be particularly fond of a wealth-weighted average of the traders’ priors, not to mention information-processing quirks. When people have different priors, they should indeed be expected to bet with each other, including via financial markets (Morris, 1994). If two people trade some asset on account of a difference in their priors about its future value, why should an onlooker adopt the intermediate beliefs that happen to be suggested by the terms of the trade? If one of them is struck by lightning before the trade can be executed, do you really always want to take the other side of the trade, regardless of who was struck?
A minimal solution: update on information, despite not knowing how to deal with other sources of disagreement
Both mistakes start by “attributing” the disagreement to one thing: a difference in information (#1) or a possible difference in priors or rationality (#2). But a disagreement may exhibit both differences. That is—and maybe this is obvious in a way, but it took me a while to internalize!—though a disagreement cannot consist only of a difference in information, a disagreement produced by a lack of CP or CKR can be exacerbated by a difference in information. When we witness such a disagreement, we unfortunately lack a clear way to resolve the bit directly attributable to possible prior- or rationality-differences. But we can still very much learn from the information-differences, just as we can learn something in the non-common-knowledge belief-difference case of Jack, Jill, and the rain.
Sometimes, furthermore, this learning should move us to take an extreme stand on some question regardless of how we deal with the prior- or rationality-differences. That is, knowledge—even incomplete—about why disagreeing parties believe what they believe can give us unusual beliefs, even under the most absolute possible standard of epistemic modesty.
I’ll illustrate this with a simple example in which everyone has CKR but not CP. I’ll do this for two reasons. First, I find relaxations of CP much easier to think about than relaxations of CKR. Second, though the two conditions are not equivalent, many natural relaxations of CKR can be modeled as relaxations of CP: there is a formal similarity between failing to, say, sufficiently increase one’s credence in some event on some piece of evidence and simply having a prior that doesn’t put as much weight on the event given that evidence. (Brandenburger et al. (1992) and Morris (1991, ch. 4) explore the relationship between the conditions more deeply.) In any event, the goal is just to demonstrate the importance of information-differences in the absence of CP and CKR, so we can do this by relaxing either one.
The population is divided evenly among people with two types of priors regarding some event x: skeptics, for whom the prior probability of x is 1/3, and enthusiasts, for whom it’s 1/2. The population exhibits CKR and a common knowledge of the distribution of priors.
It’s common knowledge that Person A has done some research. There is a common prior over what the outcome of the research will be: a 1/10 chance that it will justify increasing one’s credence in x by 1/6 (in absolute terms), a 1/10 chance that it will justify decreasing one’s credence in x by 1/6, and an 8/10 chance that it will be uninformative.
It’s also common knowledge through the population that, after A has conditioned on her research, she assigns x a probability of 1/2. A given member of the population besides A—let’s call one “B”—knows from A’s posterior that her research cannot have made x seem less likely, but he doesn’t know whether A’s posterior is due to the fact that she is a skeptic whose research was informative or an enthusiast whose research was uninformative. B considers the second scenario 8x as likely as the first. Thinking there’s a 1/9 chance that he should increase his credence by 1/6 in light of A’s findings and an 8/9 chance he should leave his credence unchanged, he increases his credence by 1/54. If he was a skeptic, his posterior is 19/54; if he was an enthusiast, his posterior is 28/54.
But an onlooker, C, who knows that A is a skeptic and did informative research will update either to 1/2, if he too is a skeptic, or to 2/3, if he started out an enthusiast. So even if C is confused about which prior to adopt, or how to mix them, he can at least be confident that he’s not being overenthusiastic if he adopts a credence of 1/2. This is true even though there is public disagreement, with half the population assigning x a probability well below 1/2 (the skeptical Bs, with posteriors of 19/54) and half the population assigning x a probability only slightly above 1/2 (the enthusiastic Bs, with posteriors of 28/54). And the disagreement can persist even if C’s own posterior is common knowledge (at least if it’s 1/2, in this example), if others don’t know the reasons for C’s posterior either.[8]
Likewise, an onlooker who knows that A is an enthusiast and did uninformative research will not update at all. He might maintain a credence in x of 1/3. This will be lower even than that of other skeptics, who update slightly on A’s posterior thinking that it might be better informed than it is.
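For anyone who wants to check the arithmetic, here is a throwaway script (mine, not part of the original example) that reproduces the fractions above:

```python
from fractions import Fraction

# Sanity check of the skeptic/enthusiast example; the numbers are exactly
# those in the text.
skeptic, enthusiast = Fraction(1, 3), Fraction(1, 2)   # priors on x
shift = Fraction(1, 6)                                  # size of a justified update
p_up, p_down, p_null = Fraction(1, 10), Fraction(1, 10), Fraction(8, 10)

# A announces a posterior of 1/2. The "research justified a decrease" outcome
# is ruled out, leaving two scenarios consistent with the announcement:
#   (i)  A is a skeptic whose research justified an increase:  1/3 + 1/6 = 1/2
#   (ii) A is an enthusiast whose research was uninformative:  1/2 + 0   = 1/2
p_i = Fraction(1, 2) * p_up      # 1/20
p_ii = Fraction(1, 2) * p_null   # 2/5, i.e. 8/20: eight times as likely as (i)
p_informative = p_i / (p_i + p_ii)          # 1/9

# B, not knowing which scenario holds, shifts by the expected justified shift.
b_update = p_informative * shift            # 1/54
print(skeptic + b_update)       # 19/54          (skeptical B)
print(enthusiast + b_update)    # 14/27 = 28/54  (enthusiastic B)

# C, who knows scenario (i) holds, applies the full shift.
print(skeptic + shift)          # 1/2  (skeptical C)
print(enthusiast + shift)       # 2/3  (enthusiastic C)
```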
EA applications
So: if we have spent a long time following the EA community, we will often be unusually well-informed about the evolution of an “unusual EA belief”. At least as long as this information remains obscure, it is not necessarily epistemically immodest at all to adopt a belief that is much closer to the EA consensus than to the non-EA consensus, given a belief-difference that is common knowledge.
To put this another way, we can partly salvage the idea that EA thought on some question is particularly trustworthy because others “haven’t engaged with the arguments”. Yes, just pointing out that someone hasn’t engaged with the arguments isn’t enough. The fact that she isn’t deferring to EA thought reveals that she has some reason to believe that EA thought isn’t just different from her own on account of being better informed, and sometimes, we should consider the fact that she believes this highly informative. But sometimes, we might also privately know that the belief is incorrect. We can recognize that many unusual beliefs are most often absorbed unreflectively by their believer from his surroundings, from Sikhism to utilitarianism—and, at the same time, know that EAs’ unusually low credences in existential catastrophe from climate change actually do just stem from thinking harder about x-risks.
We should be somewhat more suspicious of ourselves if we find ourselves adopting the unusual EA beliefs on most or all controversial questions. What prevents universal agreement, at least in a model like that of the section above, is the fact that the distribution of beliefs in a community like EA really may be unusual on some questions for arbitrary reasons.
Even coming to agree with EA consensus on most questions is not as inevitably suspicious as it may seem, though, because the extent to which a group has come to its unusual beliefs on various questions by acquiring more information, as opposed to having unusual priors, may be correlated across the questions. For example, most unambiguously, if one is comfortable assigning an unusually high probability to the event that AI will soon kill everyone, it’s not additionally suspicious to assign an unusually high probability to the event that AI will soon kill at least a quarter of the population, or at least half. More broadly, groups may simply differ in their ability to acquire information, and it may be that a particular group’s ability on this front is difficult to determine without years of close contact.
In sum, when you see someone or some group holding an unusual belief on a given controversial question, in disagreement with others, you should update toward them if you have reason to believe that their unusual belief can be attributed more to being better informed, and less to other reasons, than one would expect on a quick look. Likewise, you should update away from them, toward everyone else, if you have reason to believe the reverse. How you want to update in light of a disagreement might depend on other circumstances too, but we can at least say that updates obeying the pattern above are unimpeachable on epistemic modesty grounds.
How to apply this pattern to any given unusual EA belief is very much a matter of judgment. One set of attempts might look something like this:
- AI-driven existential risk — Weigh the credences of people in the EA community slightly more heavily than others’. Yes, many of the people in EA concerned about AI risk were concerned before much research on the subject was done—i.e. their disagreement seems to have been driven to some extent by priors—and the high concentration of AI risk worry among later arrivals is due in part to selection, with people finding AI risk concerns intuitively plausible staying and those finding them crazy leaving. But from the inside, I think a slightly higher fraction of the prevalence of AI risk concern found in the EA community is due to updating on information than it would make sense for AI-risk-skeptics outside the EA community to expect.
- AI-driven explosive growth — Weigh the credences of people in the EA community significantly more heavily than others’. The caveats above apply a bit more weakly in this case: Kurzweil-style techno-optimism has been a fair bit more weakly represented in the early EA community than AI risk concern, and there seems to have been less selection pressure toward believing in it than in believing in AI risk. The unusually high credences that many EAs assign to the event that AI will soon drive very rapid economic growth really do seem primarily to be driven by starting with typical priors and then doing a lot of research; I know at least that they are in my case. (Also, on the other side of the coin, my own “inside knowledge” of the economics community leads me to believe that their growth forecasts are significantly less informed than you would have thought from the outside.)[9]
- Polyamory — Weigh the credences of people in the EA community less heavily than others’. From the outside, people might have thought, “those EAs seem to really think things through; if a lot of them think polyamory can work just fine in the modern age, maybe they’re ahead of the curve”. But actually, EAs don’t seem to have put any more thought than non-EAs—and arguably a fair bit less—into the question of what sort of relationship norms make for flourishing lives and communities.[10] The prevalence of polyamory can be much more straightforwardly attributed to the fact that the community selects for, say, the personality trait openness: i.e. for people with unusual priors about this kind of thing.
You may well disagree with the above attempts at applying the policy. Indeed, you probably will; it would be surprising to find that the attribution of unusual EA beliefs is difficult “from the outside”, but that the exact right way to do it is obvious to anyone who reads the EA Forum. Even if you disagree with the above attempts at an application, though, and indeed even if you think this policy still departs insufficiently from “deferring to everyone equally”, at least we have a negative result. The template of the introduction goes too far in the pursuit of epistemic modesty. We should try very hard to avoid creating echo chambers, but not to the point of modeling the ideal EA community as one pursuing atypical goals with typical beliefs. In the face of disagreement, we all have to do some thinking.
Thanks to Luis Mota for helpful comments on the post, and to David Thorstad for giving it an epistemologist's seal of approval.
Footnotes
[1]
One somewhat related piece I know of is Yudkowsky’s (2017) “Against Modest Epistemology”. But I would summarize its view, and how it differs from mine, as follows:
a) Something must be wrong with epistemic modesty, because it would require you to give non-negligible credence to the existence of God, or to the event that you’re as crazy as someone in a psych ward. (But I do give non-negligible credence on both counts. In any event I certainly don’t find the conclusions absurd enough to use as a reductio.)
b) The common-sense solution, which is correct, is to keep track of how reliable different people tend to be, including yourself, and give people more cred when they have better track records. (This seems reasonable enough in practice, but how does it work in theory? What I’m looking for is a more fleshed-out story of how to reconcile a procedure like this with the intuitions for modesty we may have when the disagreers also feel they’ve been keeping track, giving more reliable people more cred, and so on.)
Overviews of the philosophy literature on the epistemology of disagreement are linked in footnote 4.
[2]
If we worry about whether we’re choosing the “right prior” (and not just about whether we’re processing information properly), and if what we mean by “processing information properly” is following Bayes’ Rule, then we’re endorsing Objective Bayesianism. As noted earlier, this post is written from an Objective Bayesian perspective.
[3]
To clarify: disagreers may both be rational, and have the same prior, yet lack CP or CKR.
Jack and Jill may have CKR but be drawn from a population whose members have different priors, for instance. Then even if Jack and Jill happen to have the same prior, they won’t know that about each other. They may therefore persist in disagreement, each thinking that the other’s different beliefs may not be due to the other’s access to better information (which would warrant an update) but due to the other’s different prior.
Such cases may seem less problematic than cases in which we know that one or both of the disagreers themselves are irrational or don’t share a prior. And in certain narrow cases, I believe they are less problematic. But often, a similar challenge remains. The two people in front of us may happen, in our judgment, to be rational and share a prior (though they don't have common knowledge of that fact between them); but what makes this fact not common knowledge between them is that, in some sense, they might not have done. Under these circumstances, it can be reasonable to worry that the prior this pair happens to share isn’t “the right one”, or that we ourselves are not “one of the rational people”. From here on, I’ll just refer to absences of CP/CKR as possible differences in priors or information-processing abilities, and note that they can raise theoretical issues that belief-differences attributable entirely to information-differences do not.
[4]
Most of the literature I cite throughout this post is from economists, since it’s what I know best. But there is also a large, and mostly rather recent, literature on disagreement in philosophy (recent according to Frances and Matheson’s 2018 SEP article, which incidentally seems to provide a good overview). I have hardly read all 655 items tagged “Epistemology of Disagreement” on PhilPapers, so maybe it’s immodest of me to think I have anything to say; but on reading the most cited and skimming a few others, I think I can at least maintain that there’s no consensus about what to make of a belief-difference that persists in the face of common knowledge.
[5]
Technically, to guarantee that announcing posteriors back and forth produces convergence in beliefs, we require one more assumption than is required for Aumann’s theorem, namely finite information partitions. See Geanakoplos and Polemarchakis (1982).
[6]
At least outside of certain narrow cases, as noted tangentially in footnote 3, which I don’t believe are the relevant empirical cases.
[7]
This point is essentially made at greater length by Morris (1995).
[8]
In this example, if C sets a posterior of something other than 1/2, everyone who knows C’s credence will be able to infer that A’s research was informative. Everyone will therefore update all the way to 1/2 or 2/3. But this is just an artifact of how stylized the example is. If C’s prior and the informativeness of A’s research follow continuous distributions supported everywhere, C can be known to have updated in any way without this revealing much about how informative A’s research was.
[9]
That said, the fact that some people assign high credence to AI-driven explosive growth is hardly a secret; and since this seems like it would affect how one should invest, investors have strong incentives to look into the question of why believers believe what they believe. (Say) Tom Davidson’s report on explosive growth is somewhat long, and the fact that he wasn’t a big singularitarian a few years ago is somewhat obscure, but not so long or so obscure as to account for a large, persistent belief-gap. And indeed, it seems that fewer people consider AI-driven explosive growth scenarios absurd than used to; the belief-gap has closed somewhat. But if we’re going to attribute most of the gap to a difference in information, I think we do need more of a story about why it has persisted as long as it has.
One possible answer would be that, actually, even if there is a decent chance that AI-driven explosive growth is coming, that shouldn’t change how most people invest (or live in general)—and in fact that this is obvious enough before looking into it that for most people, including most large investors, looking into it hasn’t been worth the cost.
Similarly, one could answer that a growth explosion seems improbable enough that looking into it—even to the point of looking into how its current believers came to believe in it—wasn’t worth the cost in expectation. This raises the question of why investigating this hypothesis deeply enough to write long reports on it would be worth the cost at Open Phil when even skimming the reports on it wasn’t worth the cost at, say, Goldman Sachs. But maybe it was. Maybe the hypothesis is unlikely enough ex ante that only a motivation to avoid “astronomical waste” makes it worth looking into at all.
But if no story making sense of a large, persistent information-difference seems plausible, one should presumably be skeptical that it’s what accounts for much of the disagreement. And if it doesn’t, the procedure defended in this post does not justify giving EAs’ credences in explosive growth [much] more weight than others’.
The general principle here is that, to think you have an “edge” as a result of some information that it would be costly for others to acquire (like the evolution of your friends’ beliefs), you have to believe that the value of this information is smaller—ex ante, from the others’ perspective—than the costs of acquiring it.
[10]
But note that you don’t have to think that EAs have put less thought into this question than others to conclude that EAs’ credences should get less weight than others’. You only need to think that EAs have put less thought into this question than one would have reason to expect from outside the community.
Comments

This is exciting; I'm optimistic that digesting the formalisms here will help me.
Ideally I'd like to think about obligations or burdens as it relates to epistemic diversity, primarily a group's obligation to seek out quality dissent. I've recently been deciding that echo chamber risks are actually a lot less bad than the damage I've taken from awareness of what unsophisticated critics are up to. This awareness has made me less magnanimous, enthusiastic about cooperation, etc. To what extent is it my burden to protect my attention better, and to what extent is it the critic's burden to be more sophisticated? The former seems fraught: any heuristic would most likely be an operationalization of parochial preferences or cultural baggage, or I'd have no way of being sure it's not. The latter seems fraught: it's out of my control.
This is related to the hugboxing problem, even though I think that post was about how taking unsophisticated lines of attack seriously underserves the critic, and I'm talking about how it underserves us.
Thanks, I found this very helpful for formalising and structuring how I think about the EA community's positive and negative idiosyncrasies.
Great, thanks!
Me as well! Thanks a lot!
Some interesting points re: considering how beliefs were formed, but I think it proves too much.
One of the main values of being able to defer to a certain extent to the EA community is knowing that the community will often be using a process similar to yours to come to a conclusion, so that you have an estimate of the conclusion you would have come up with if you had more time.
Of course, you also need to take into account the possibility of there being a deference cycle.
Sorry, I’m afraid I don’t follow on either count. What’s a claim you’re saying would follow from this post but isn’t true?
More weight on community opinions than you suggested.
Would you have a moment to come up with a precise example, like the one at the end of my “minimal solution” section, where the argument of the post would justify putting more weight on community opinions than seems warranted?
No worries if not—not every criticism has to come with its own little essay—but I for one would find that helpful!
Sorry, I’m trying to reduce the amount of time I spend on the forum.
Should that say lower, instead?
It should, thanks! Fixed
I'm a bit confused by this. Suppose that EA has a good track record on an issue where its beliefs have been unusual from the get-go. For example, I think that by temperament EAs tend to be more open to sci-fi possibilities than others, even before having thought much about them; and that over the last decade or so we've increasingly seen sci-fi possibilities arising. Then I should update towards deferring to EAs because it seems like we're in the sort of world where sci-fi possibilities happen, and it seems like others are (irrationally) dismissing these possibilities.
On a separate note: I currently don't think that epistemic deference as a concept makes sense, because defying a consensus has two effects that are often roughly the same size: it means you're more likely to be wrong, and it means you're creating more value if right.* But if so, then using deferential credences to choose actions will systematically lead you astray, because you'll neglect the correlation between likelihood of success and value of success.
Toy example: your inside view says your novel plan has 90% chance of working, and if it does it'll earn $1000; and experts think it has 10% chance of working, and if it does it'll earn $100. Suppose you place as much weight on your own worldview as experts'. Incorrect calculation: your all-things-considered credence in your plan working is 50%, your all-things-considered estimate of the value of success is $550, your all-things-considered expected value of the plan is $275. Better calculation: your worldview says that the expected value of your plan is $900, the experts think the expected value is $10, average these to get expected value of $455—much more valuable than in the incorrect calculation!
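Spelled out in a few lines of throwaway code (same numbers as above, with equal weight on the two worldviews):

```python
# The toy numbers from the comment above.
p_mine, v_mine = 0.9, 1000      # inside view: chance of success, value if it works
p_expert, v_expert = 0.1, 100   # expert view

# "Incorrect" route: average credences and values separately, then multiply.
p_avg = (p_mine + p_expert) / 2             # 0.5
v_avg = (v_mine + v_expert) / 2             # 550.0
print(p_avg * v_avg)                        # 275.0

# "Better" route: compute expected value under each worldview, then average.
print((p_mine * v_mine + p_expert * v_expert) / 2)   # 455.0
```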
Note that in the latter calculation we never actually calculated any "all-things-considered credences". For this reason I now only express such credences with a disclaimer like "but this shouldn't be taken as action-guiding".
* A third effect which might be bigger than either of them: it motivates you to go out and try stuff, which will give you valuable skills and make you more correct in the future.
I'm defining a way of picking sides in disagreements that makes more sense than giving everyone equal weight, even from a maximally epistemically modest perspective. The way in which the policy "give EAs more weight all around, because they've got a good track record on things they've been outside the mainstream on" is criticizable on epistemic modesty grounds is that one could object, "Others can see the track record as well as you. Why do you think the right amount to update on it is more than they think the right amount is?" You can salvage a thought along these lines in an epistemic-modesty-criticism-proof way, but it would need some further story about how, say, you have some "inside information" about the fact of EAs' better track record. Does that help?
Your quote is replying to my attempt at a "gist", in the introduction--I try to spell this out a bit further in the middle of the last section, in the bit where I say "More broadly, groups may simply differ in their ability to acquire information, and it may be that a particular group’s ability on this front is difficult to determine without years of close contact." Let me know if that bit clarifies the point.
Re your separate note: I don't follow. I get that acting on low-probability scenarios can let you get in on neglected opportunities, but you don't want to actually get the probabilities wrong, right?
In any event, maybe messing up the epistemics also makes it easier for you to spot neglected opportunities or something, and maybe this benefit sometimes kind of cancels out the cost, but this doesn't strike me as relevant to the question of whether epistemic deference as a concept makes sense. Startup founders may benefit from overconfidence, but overconfidence as a concept still makes sense.
I reject the idea that all-things-considered probabilities are "right" and inside-view probabilities are "wrong", because you should very rarely be using all-things-considered probabilities when making decisions, for reasons of simple arithmetic (as per my example). Tell me what you want to use the probability for and I'll tell you what type of probability you should be using.
You might say: look, even if you never actually use all-things-considered probabilities in the real world, at least in theory they're still normatively ideal. But I reject that too—see the Anthropic Decision Theory paper for why.
The probability of success in some project may be correlated with value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(treasure) * P(success|treasure) * value of treasure. All the probabilities are “all-things-considered”.
I respect you a lot, both as a thinker and as a friend, so I really am sorry if this reply seems dismissive. But I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that. I trust that most readers who aren’t in the hole will not see your example as demonstration that you shouldn’t use all-things-considered probabilities when making decisions, so I won’t press the point beyond this comment.
From my perspective it's the opposite: epistemic modesty is an incredibly strong skeptical argument (a type of argument that often gets people very confused), extreme forms of which have been popular in EA despite leading to conclusions which conflict strongly with common sense (like "in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue").
In practice, fortunately, even people who endorse strong epistemic modesty don't actually implement it, and thereby manage to still do useful things. But I haven't yet seen any supporters of epistemic modesty provide a principled way of deciding when to act on their own judgment, in defiance of the conclusions of (a large majority of) the 8 billion other people on earth.
By contrast, I think that focusing on policies rather than all-things-considered credences (which is the thing I was gesturing at with my toy example) basically dissolves the problem. I don't expect that you believe me about this, since I haven't yet written this argument up clearly (although I hope to do so soon). But in some sense I'm not claiming anything new here: I think that an individual's all-things-considered deferential credences aren't very useful for almost the exact same reason that it's not very useful to take a group of people and aggregate their beliefs into a single set of "all-people-considered" credences when trying to get them to make a group decision (at least not using naive methods; doing it using prediction markets is more reasonable).
I don't fully follow this explanation, but if it's true that defying a consensus has two effects that are the same size, doesn't that suggest you can choose any consensus-defying action because the EV is the same regardless, since the likelihood of you being wrong is ~cancelled out by the expected value of being right?
Also the "value if right" doesn't seem likely to be only modulated by the extent to which you are defying the consensus?
Example:
If you are flying a plane and considering a new way of landing that goes against what 99% of pilots think is reasonable, the "value if right" might be much smaller than the negative effects of "value if wrong". It's also not clear to me that if you now decide to take a landing approach that goes against what 99.9% of pilots think is reasonable, you will 10x your "value if right" compared to the 99% action.
That said, thanks for sharing the Anthropic Decision Theory paper! I’ll check it out.
I appreciate the reminder that "these people have done more research" is itself a piece of information that others can update on, and that the mystery of why they haven't isn't solved. (Just to ELI5, we're assuming no secret information, right?)
I suppose this is very similar to "are you growing as a movement because you're convincing people or via selection effects" and if you know the difference you can update more confidently on how right you are (or at least how persuasive you are).
Thanks!
No actually, we’re not assuming in general that there’s no secret information. If other people think they have the same prior as you, and think you’re as rational as they are, then the mere fact that they see you disagreeing with them should be enough for them to update on. And vice-versa. So even if two people each have some secret information, there’s still something to be explained as to why they would have a persistent public disagreement. This is what makes the agreement theorem kind of surprisingly powerful.
The point I’m making here though is that you might have some “secret information” (even if it’s not spelled out very explicitly) about the extent to which you actually do have, say, a different prior from them. That particular sort of “secret information” could be enough to make it inappropriate for you to update toward each other; it could account for a persistent public disagreement. I hope that makes sense.
Agreed about the analogy to how you might have some inside knowledge about the extent to which your movement has grown because people have actually updated on the information you’ve presented them vs. just selection effects or charisma. Thanks for pointing it out!
Right, right, I think on some level this is very unintuitive, and I appreciate you helping me wrap my mind around it: even secret information is not a problem as long as people are not lying about their updates (though if all updates are secret there's obviously much less to update on).
Yup!
I found the framing of "Is this community better-informed relative to what disagreers expect?" new and useful, thank you!
To point out the obvious: Your proposed policy of updating away from EA beliefs if they come in large part from priors is less applicable for many EAs who want to condition on "EA tenets". For example, longtermism depends on being quite impartial regarding when a person lives, but many EAs would think it's fine that we were "unusual from the get-go" regarding this prior. (This is of course not very epistemically modest of them.)
Here are a few more not-well-fleshed-out, maybe-obvious, maybe-wrong concerns with your policy:
Side-note: I found this post super hard to parse and would've appreciated it a lot if it was more clearly written!
Thanks! Glad to hear you found the framing new and useful, and sorry to hear you found it confusingly written.
On the point about "EA tenets": if you mean normative tenets, then yes, how much you want to update on others' views on that front might be different from how much you want to update on others' empirical beliefs. I think the natural dividing line here would be whether you consider normative tenets more like beliefs (in which case you update when you see others disagreeing--along the lines of this post, say) or more like preferences (in which case you don't). My own guess is that they're more like beliefs--i.e. we should take the fact that most people reject temporal impartiality as at least some evidence against longtermism--but thanks for noting that there's a distinction one might want to make here.
On the three bullet points: I agree with the worries on all counts! As you sort of note, these could be seen as difficulties with "implementing the policy" appropriately, rather than problems with the policy in the abstract, and that is how I see them. But I take the point that if an idea is hard enough to implement then there might not be much practically to be learned from it.