*[Written September 02, 2022. Note: I'm likely to not respond to comments promptly.]*

Sometimes people defer to other people, e.g. by believing what they say, by following orders, or by adopting their intents or stances. In many cases it makes sense to defer: other people know more than you about many things; it's useful to share eyes and ears; coordination and specialization are valuable; and one can "inquisitively defer" to opinions by taking them as challenges to investigate further, trying them out for oneself. But deferring has major issues, among which are:

* Deferral-based opinions don't contain the detailed content that generated the opinions, and therefore can't direct action effectively or update on new evidence correctly. 
* Acting based on deferral-based opinions is discouraging, because the whole of you especially can't see why the action is good.
* Acting based on deferral-based opinions to some extent removes the "meaning" of learning new information; if you're just going to defer anyway, it's sort of irrelevant to gain information, and your brain can kind of tell that, so you don't seek information as much. Deference therefore constricts the influx of new information to individuals and groups.
* A group with many people deferring to others will amplify [information cascades](https://en.wikipedia.org/wiki/Information_cascade) by double-, triple-, quadruple-counting non-deferral-based evidence (illustrated in the sketch after this list).
* A group with many people deferring to others will have mistakenly correlated beliefs and actions, and so will fail to explore many worthwhile possibilities.
* The deferrer will copy beliefs mistakenly imputed to the deferred-to that would have explained the deferred-to's externally visible behavior. This pushes in the direction opposite to science because science is the way of making beliefs come apart from their pre-theoretical pragmatic implications.
* Sometimes the deferrer, instead of imputing beliefs to the deferred-to and adopting those beliefs, will adopt the same model-free behavioral stance that the deferred-to has adopted to perform to onlookers, such as pretending to believe something while acting towards no coherent purpose other than to maintain the pretense. 
* If the deferred-to takes actions for PR reasons, e.g. attempting to appear from the outside to hold some belief or intent that they don't actually hold, then the PR might work on the deferrer so that the deferrer systematically adopts the false beliefs and non-held intents performed by the deferred-to (rather than adopting beliefs and intents that would actually explain the deferred-to's actions as part of a coherent worldview and strategy).
* Allocating resources based on deferral-based opinions potentially opens up niches for non-epistemic processes, such as hype, fraud, and power-grabbing.
* These dynamics will be amplified when people choose whom to defer to according to how much that person is already being deferred to.
* To the extent that these dynamics increase the general orientation of deference itself, deference recursively amplifies itself.
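
To make the cascade-amplification point concrete, here's a minimal sketch in Python. This is an illustrative toy model, not a claim about any real group: the 60% signal accuracy, the group size of 100, and the deference probabilities are all made-up parameters. Each agent in sequence either copies the current majority opinion or reports a weak private signal.

```python
import random

def run_group(n_agents: int, p_defer: float, p_correct: float = 0.6) -> bool:
    """Agents form a binary opinion in sequence. Each agent either copies
    the current majority (re-counting earlier evidence) with probability
    p_defer, or reports a private signal that's correct with probability
    p_correct. Returns True if the final majority opinion is correct."""
    opinions = []
    for _ in range(n_agents):
        if opinions and random.random() < p_defer:
            opinions.append(sum(opinions) * 2 >= len(opinions))  # copy the majority so far
        else:
            opinions.append(random.random() < p_correct)  # independent private signal
    return sum(opinions) * 2 >= len(opinions)

def majority_accuracy(p_defer: float, trials: int = 2000) -> float:
    """Fraction of simulated groups whose final majority is correct."""
    return sum(run_group(100, p_defer) for _ in range(trials)) / trials

random.seed(0)
for p in (0.0, 0.5, 0.9):
    print(f"p_defer={p}: majority correct in {majority_accuracy(p):.0%} of runs")
```

With no deference, a hundred weak signals wash out each other's noise and the majority is almost always right; with heavy deference, the first few signals get counted over and over, and the group is barely more accurate than a single agent.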

Together, these dynamics put deferral-based opinions under strong pressure to stop functioning as actual beliefs, the kind that can be used to make successful plans and can be continually updated to track reality. So I recommend that people

* keep these dynamics in mind when deferring, 
* track the difference between believing someone's testimony vs. deferring to beliefs imputed to someone based on their actions vs. adopting non-belief performative stances, and 
* give substantial parliamentary decision-weight to the recommendations made by their expectations about facts-on-the-ground that they can see with their own eyes.

Not to throw away arguments or information from other people, or to avoid investigating important-if-true claims, but to *think as though thinking matters*. 
 

Comments

To put it in simple language:

Imagine a forecasting tournament where virtually no one submitted independent predictions, and everyone just copied the forecasts of the individual whom everyone thought, at the outset, was already the best forecaster. Obviously the tournament will generate fairly useless outputs, unless it so happens that the so-called "best forecaster" really is good and dialled-in, and maybe a bit lucky as well.
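
A quick simulation of that tournament, for what it's worth (all numbers are made up: the 0.15 noise level, 50 forecasters, and 200 questions are arbitrary choices, and `tournament_error` is just an illustrative helper):

```python
import random

def tournament_error(n_forecasters: int = 50, n_questions: int = 200,
                     copy_leader: bool = False) -> float:
    """Mean error of the crowd's aggregate forecast. Each forecaster
    reports the true probability plus independent noise; if copy_leader
    is True, everyone just submits forecaster 0's number."""
    total = 0.0
    for _ in range(n_questions):
        truth = random.random()
        forecasts = [min(1.0, max(0.0, truth + random.gauss(0, 0.15)))
                     for _ in range(n_forecasters)]
        if copy_leader:
            forecasts = [forecasts[0]] * n_forecasters  # everyone copies the "best"
        total += abs(sum(forecasts) / n_forecasters - truth)
    return total / n_questions

random.seed(0)
print("independent:", round(tournament_error(copy_leader=False), 3))
print("copy leader:", round(tournament_error(copy_leader=True), 3))
```

Independent errors average out across fifty forecasters; copied errors don't, so the aggregate is exactly as noisy as the one person everyone copied.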

Epistemic deference is just obviously parasitic, a sort of dreadful tragedy of the commons of the mind. Take a walk on the wild side. Don't be afraid to be wrong!

> Epistemic deference is just obviously parasitic, a sort of dreadful tragedy of the commons of the mind.

I don't think this is right. One has to defer quite a lot; most of what we do is, appropriately, deferring in one way or another. The world is complicated, there's far more information than anyone can process, and our problems are high-context (that is, they require compressing and abstracting from a lot of information). And coordination is important.

I think a blunt-force "just defer less" is therefore not viable. Instead, a more detailed understanding of what's undesirable about specific cases of deference makes it possible to defer less precisely where deference is most harmful, and to alleviate those dangers.

I agree that "defer less" might not be viable advice for the median human, or even bad advice, but for the median EA I think it's pretty good advice.

Deference should imo be explicitly temporary and provisional. "I will outsource to X until such time as I can develop my own opinion" is not always a bad move, and might well be a good one in some contexts, but you do actually need to develop your own takes on the things that matter if you want to make any useful contributions to anything ever. 

I agree that "defer less" is good advice for EAs, but that's because EAs are especially deferent, and also especially care about getting things right and might actually do something sane about it. I think part of doing something sane about it is to have a detailed model of deference. 

If someone said, "I am not going to wear masks because I am not going to defer to expert opinions on the epidemiology of COVID-19," how would someone taking the advice of this article respond to that?

Overall, being a noob, I found the language in this article difficult to read. So, I am giving you a specific scenario that many people can relate to and then trying to learn what you are saying from that.

I think the usefulness of deferring also depends on how established a given field is, how many people are experts in that field, and how certain they are of their beliefs. 

If a field has 10,000+ experts who are 95%+ certain of their claims on average, then it probably makes sense to defer as a default. (This would be the case for many medical claims, such as wearing masks, vaccinations, etc.) If a field has 100 experts who are more like 60% certain of their claims on average, then it makes sense to explore the available evidence yourself, or at least to keep in mind that there is no strong expert consensus when you are sharing information.
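
Here's a toy Bayesian version of that heuristic (the numbers are illustrative, and the independence assumption is exactly what deference cascades break, so treat the headcount as only an upper bound on the effective n):

```python
def posterior_true(n_independent: int, expert_accuracy: float,
                   prior: float = 0.5) -> float:
    """Probability a claim is true after n effectively independent experts
    endorse it, each correct with probability expert_accuracy. Correlated
    experts count as far fewer independent ones."""
    lr = expert_accuracy / (1 - expert_accuracy)  # per-expert likelihood ratio
    odds = (prior / (1 - prior)) * lr ** n_independent
    return odds / (1 + odds)

# A large, established field vs. a small, uncertain one:
print(f"20 independent experts at 95%: P(true) ~ {posterior_true(20, 0.95):.4f}")
print(f" 3 independent experts at 60%: P(true) ~ {posterior_true(3, 0.60):.4f}")
```

Under independence, even modest accuracy compounds fast; what matters is the effective number of independent judgments, not the headcount, and the cascade dynamics described in the post shrink that number.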

We can't know everything about every field, and it's not reasonable to expect everyone to look deeply into the arguments for every topic. But I think there can be a tendency among EAs to defer on topics where there is little expert consensus, lots of robust debate among knowledgeable people, and high levels of uncertainty (e.g., many areas of AI safety). While not everyone has the time to explore AI safety arguments for themselves, it's helpful to keep in mind that, for the most part, there isn't a consensus among experts (yet), and many people who are very knowledgeable about this field still carry high levels of uncertainty about their claims.

Ah! That makes sense.

I agree that the EA thing to do would be to work on and explore cause areas oneself, instead of just blindly relying on 80,000 Hours' cause areas or something like that.
