*[Written September 02, 2022. Note: I'm likely to not respond to comments promptly.]*
Sometimes people defer to other people, e.g. by believing what they say, by following orders, or by adopting their intents or stances. In many cases it makes sense to defer: other people know more than you about many things, it's useful to share eyes and ears, coordination and specialization are valuable, and one can "inquisitively defer" to an opinion by taking it as a challenge to investigate further by trying it out for oneself. But there are major issues with deferring, among which are:
* Deferral-based opinions don't contain the detailed content that generated the opinions, and therefore can't direct action effectively or update on new evidence correctly.
* Acting based on deferral-based opinions is discouraging because, more than with opinions you generated yourself, the whole of you can't see why the action is good.
* Acting based on deferral-based opinions to some extent removes the "meaning" of learning new information; if you're just going to defer anyway, it's sort of irrelevant to gain information, and your brain can kind of tell that, so you don't seek information as much. Deference therefore constricts the influx of new information to individuals and groups.
* A group with many people deferring to others will amplify [information cascades](https://en.wikipedia.org/wiki/Information_cascade) by double-triple-quadruple-counting non-deferral-based evidence (see the worked example after this list).
* A group with many people deferring to others will have mistakenly correlated beliefs and actions, and so will fail to explore many worthwhile possibilities.
* The deferrer will copy beliefs mistakenly imputed to the deferred-to, i.e. whatever beliefs would explain the deferred-to's externally visible behavior. This pushes in the opposite direction from science, because science is the way of making beliefs come apart from their pre-theoretical pragmatic implications.
* Sometimes the deferrer, instead of imputing beliefs to the deferred-to and adopting those beliefs, will adopt the same model-free behavioral stance that the deferred-to has adopted to perform to onlookers, such as pretending to believe something while acting towards no coherent purpose other than to maintain the pretense.
* If the deferred-to takes actions for PR reasons, e.g. attempting to appear from the outside to hold some belief or intent that they don't actually hold, then the PR might work on the deferrer so that the deferrer systematically adopts the false beliefs and non-held intents performed by the deferred-to (rather than adopting beliefs and intents that would actually explain the deferred-to's actions as part of a coherent worldview and strategy).
* Allocating resources based on deferral-based opinions potentially opens up niches for non-epistemic processes, such as hype, fraud, and power-grabbing.
* These dynamics will be amplified when people choose who to defer to according to how much the person is already being deferred to.
* To the extent that these dynamics increase the general orientation of deference itself, deference recursively amplifies itself.
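To make the information-cascade point concrete, here is a minimal sketch (the numbers are hypothetical): one private observation worth 2:1 odds in favor of a claim gets restated down a chain of deferrers, and an onlooker who treats each restatement as independent evidence ends up far more confident than the actual evidence warrants.

```python
# Minimal sketch of deference double-counting evidence (hypothetical numbers).
# Bayesian updating in odds form: posterior odds = prior odds * likelihood ratios.

def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0           # 1:1 prior odds on some claim H
observation_lr = 2.0  # Alice's single private observation, worth 2:1 for H

# Correct pooling: there is only one piece of evidence in the whole group.
correct = posterior_odds(prior, [observation_lr])       # 2.0, i.e. 2:1

# Cascade: Bob defers to Alice; Carol counts Alice's and Bob's stated beliefs
# as two independent observations; Dan counts all three restatements.
cascaded = posterior_odds(prior, [observation_lr] * 3)  # 8.0, i.e. 8:1

print(f"correct odds {correct}:1 vs. cascaded odds {cascaded}:1")
```

The same 2:1 observation, echoed by deferrers and recounted at each step, masquerades as 8:1 worth of evidence.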
Together, these dynamics make it so that deferral-based opinions are under strong pressure to not function as actual beliefs, i.e. beliefs that can be used to make successful plans and continually updated to track reality. So I recommend that people
* keep these dynamics in mind when deferring,
* track the difference between believing someone's testimony vs. deferring to beliefs imputed to someone based on their actions vs. adopting non-belief performative stances, and
* give substantial parliamentary decision-weight to the recommendations made by their expectations about facts-on-the-ground that they can see with their own eyes.
Not to throw away arguments or information from other people, or to avoid investigating important-if-true claims, but to *think as though thinking matters*.
I think the usefulness of deferring also depends on how established a given field is, how many people are experts in that field, and how certain they are of their beliefs.
If a field has 10,000+ experts who are on average 95%+ certain of their claims, then it probably makes sense to defer as a default. (This would be the case for many medical claims, such as mask-wearing and vaccination.) If a field has 100 experts who are more like 60% certain of their claims on average, then it makes sense to explore the available evidence yourself, or at least to keep in mind, when sharing information, that there is no strong expert consensus.
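As a rough illustration of this heuristic (the thresholds below are just the numbers from this paragraph, not calibrated cutoffs), one might sketch it as:

```python
# A rough sketch of the deferral heuristic above; the thresholds are the
# illustrative numbers from the text, not calibrated cutoffs.

def deferral_stance(num_experts: int, avg_certainty: float) -> str:
    if num_experts >= 10_000 and avg_certainty >= 0.95:
        return "defer by default (large field, strong consensus)"
    if num_experts <= 100 or avg_certainty <= 0.60:
        return "explore the evidence yourself; flag the weak consensus when sharing"
    return "defer provisionally, but keep tracking the uncertainty"

print(deferral_stance(10_000, 0.95))  # e.g. many established medical claims
print(deferral_stance(100, 0.60))     # e.g. many areas of AI safety
```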
We can't know everything about every field, and it's not reasonable to expect everyone to look deeply into the arguments on every topic. But I think there can be a tendency among EAs to defer on topics where there is little expert consensus, lots of robust debate among knowledgeable people, and high levels of uncertainty (e.g. many areas of AI safety). While not everyone has the time to explore AI safety arguments for themselves, it's helpful to keep in mind that, for the most part, there isn't a consensus among experts (yet), and many people who are very knowledgeable about this field still carry high levels of uncertainty about their claims.