Epistemic deference


Learning that some person or group of people hold certain views may sometimes provide valid grounds for epistemic deference, that is, for updating our own beliefs in response to what others appear to believe, even if we ignore the reasons for those beliefs or do not find those reasons persuasive. The question of when, how, and to what extent a rational agent should defer to others has been studied, from somewhat different angles, by philosophers working in social epistemology and by economists working in game theory.
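To make this concrete, here is a minimal sketch of one simple formal model of deference, linear ("equal-weight") pooling of credences. The choice of model, the function name, and the example numbers are illustrative assumptions, not anything prescribed by this entry or the literature it cites.

```python
# A minimal sketch of one simple model of epistemic deference: linear
# pooling of credences. Model, names, and numbers are illustrative
# assumptions, not taken from the entry or its sources.

def pooled_credence(credences, weights=None):
    """Combine several agents' credences in a proposition.

    credences: probabilities in [0, 1], one per agent (self included).
    weights: optional non-negative deference weights; equal weights
        correspond to treating everyone as an epistemic peer.
    """
    if weights is None:
        weights = [1.0] * len(credences)
    total = sum(weights)
    return sum(c * w for c, w in zip(credences, weights)) / total

# My credence is 0.9 and a peer reports 0.5. Equal weighting pulls me
# to 0.7; weighting my own reasoning three times as heavily, to 0.8.
print(round(pooled_credence([0.9, 0.5]), 3))              # 0.7
print(round(pooled_credence([0.9, 0.5], [3.0, 1.0]), 3))  # 0.8
```

In this toy model, equal weights correspond to a conciliatory, equal-weight view, while giving your own reasoning a larger weight corresponds to less conciliatory positions.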

Discussion

Stefan_Schubert: Deference is one part of social epistemology, and it's arguably slightly inelegant to have a tag for the conjunction of a topic and a proper part of that topic. So this tag could perhaps be split into two: "epistemic deference" and "social epistemology". Or you could just have an "epistemic deference" tag, in case there isn't enough material to file under the "social epistemology" tag.

If you go for having a separate deference tag, I think it's better to call it "epistemic deference", since the word "deference" can also be used in other, non-epistemic senses.

How to balance inside vs outside views doesn't seem to be part of social epistemology, strictly speaking, since the problem arises for individuals as well.

Pablo: Thanks. I agree that the current name is not fully satisfactory. I just spent some time looking into the social epistemology literature, and I think the only part of it of sufficient relevance to cover on the Wiki is the part concerned with the epistemology of disagreement. The other main branches, namely the attribution of epistemic states to collective agents and the status of belief based on testimony, seem significantly less relevant. On the other hand, the body of the entry mentions various other phenomena of interest, such as balancing inside and outside views, and avoiding anchoring and information cascades. Insofar as we want to cover these topics, I think we should do so in separate articles, rather than amalgamating them all in a single entry without a crisp focus.

In light of this, my proposal is to rename this article "epistemic deference" and incorporate into it any relevant content from the epistemic disagreement [https://forum.effectivealtruism.org/tag/epistemic-disagreement] article (which was imported from EA Concepts). We already have an article on inside vs. outside view [https://forum.effectivealtruism.org/tag/inside-vs-outside-view], and articles on information cascade and anchoring could be created, if needed. Let me know if you are happy with these changes. (This is addressed not only to Stefan, but to anyone reading this comment.)
Stefan_Schubert: Thanks, that sounds good.
Pablo: Okay, I changed the title, updated the entry, and deleted the other entry. I'm not sure what to do with that content, so I'm copying it below for the time being.

--

Sometimes we find that we disagree with smart, well-informed people about how the world is or ought to be, even if those people seem to share our prior beliefs [https://concepts.effectivealtruism.org/concepts/bayes-rule], values [https://concepts.effectivealtruism.org/concepts/theories-of-value], and our evidence [https://concepts.effectivealtruism.org/concepts/evidence]. If we think that there is a restricted range of credences that are rational on the same prior beliefs and evidence, then disagreement with others is evidence that at least one of us has made a mistake in our reasoning (White 2005). This is not so surprising in cases where the evidence is complex and difficult to assess.

What should we do when we encounter this kind of disagreement? The responses to this problem range from more “conciliatory” responses, which say that we ought to move closer to the views of our peers in cases of disagreement, to less conciliatory responses, which say that we ought to stick to our guns if we haven't made a mistake. More conciliatory responses, such as the view that we should treat ourselves and others like truth thermometers (White 2009), offer more actionable advice than radically non-conciliatory views. The latter say that you should stick with your own views if you are right and move to your interlocutor's views if they are right, even though you don't know which of you is right.

And sometimes we want to treat the testimony of others as evidence, even if we don't have access to their reasoning. Results like Condorcet's jury theorem (Wikipedia 2005) suggest that if many independent agents converge on the same answer to a question, we should treat that as good evidence that the answer they have converged on is correct.

BIBLIOGRAPHY

Goldman, Alvin & Cailin O’Connor (2019) Peer disagreement [https://plato
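The Condorcet point in the pasted entry above lends itself to a quick numerical check. Below is a minimal sketch, assuming an odd number of independent agents who are each correct with the same probability p; the function name and parameter values are illustrative assumptions.

```python
# A minimal numerical illustration of Condorcet's jury theorem: if n
# independent agents are each correct with probability p > 0.5, the
# probability that a majority of them is correct approaches 1 as n
# grows. Parameter values below are illustrative assumptions.

from math import comb

def majority_correct(n, p):
    """P(a strict majority of n independent agents is correct), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
# With p = 0.6: n=1 gives 0.6, n=11 about 0.75, n=101 about 0.98, so
# convergence among many independent agents is strong evidence.
```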

Bibliography

Aird, Michael (2020) Collection of discussions of epistemic modesty, “rationalist/EA exceptionalism”, and similar, LessWrong, March 30.
Many additional resources on this topic or related topics.

LessWrong (2021) Modest epistemology, LessWrong Wiki, April 7.