
Most people in the knowledge-producing industries - academia, foundations, media, and think tanks - are not Bayesians. This makes it difficult to know how Bayesians should go about deferring to experts.

Many experts are guided by what Bryan Caplan has called ‘myopic empiricism’, also sometimes called scientism. That is, they are guided disproportionately by what the published scientific evidence on a topic says, and less so by theory, common sense, scientific evidence from related domains, and other forms of evidence. The problem with this is that, for various reasons, standards in published science are not very high, as the replication crisis across psychology, empirical economics, medicine and other fields has illustrated. Much published scientific evidence is focused on the discovery of statistically significant results, which is not what we ultimately care about, from a Bayesian point of view. Researcher degrees of freedom, reporting bias and other factors also create major risks of bias.

Moreover, published scientific evidence is not the only thing that should determine our beliefs.

1. Examples

I will now discuss some examples where the experts have taken views which are heavily influenced by myopic empiricism, and so their conclusions can come apart from what an informed Bayesian would say. 

Scepticism about the efficacy of masks

Leading public health bodies claimed at the start of the pandemic that masks didn't work to stop the spread.1 This was in part because there were observational studies finding no effect (concerns about risk compensation and about reserving supplies for medical personnel were also factors).2 But everyone also agrees that COVID-19 spreads by droplets released from the mouth or nose when an infected person coughs, sneezes, or speaks. If you put a mask in the way of these droplets, your strong prior should be that doing so reduces the spread of covid. There are videos of masks doing the blocking. This should lead one to suspect that the published scientific research finding no effect is mistaken, as subsequent research has confirmed.

Scepticism about the efficacy of lockdowns

Some intelligent people are sceptical not only about whether lockdowns pass a cost-benefit test, but even about whether lockdowns reduce the incidence of covid. Indeed, there are various published scientific papers suggesting that such measures have no effect.3 One issue such social science studies face is that the severity of a covid outbreak is positively correlated with the strength of the lockdown measures, so it is difficult to tease out cause and effect. This is especially true in cross-country regressions, where the sample size isn't that big and there are dozens of other important factors at play that are difficult or impossible to control for properly.

As with masks, given our knowledge of how covid spreads, on priors it would be extremely surprising if lockdowns didn't work. If you stop people from going to a crowded pub, this clearly reduces the chance that covid will pass from person to person. Unless we want to give up on the germ theory of disease, we should have an extremely strong presumption that lockdowns work. This means an extremely strong presumption that most of the social science finding null results is mistaken.

Scepticism about first doses first

In January, the British government decided to implement ‘first doses first’ - an approach of giving out as many first doses of the vaccine as possible before giving out second doses. This means leaving a longer gap between the two doses - 12 weeks rather than 21 days. However, the 21-day gap was what was tested in the clinical trial of the Oxford/AstraZeneca vaccine. As a result, we don’t know from the trial whether spacing out the doses has a dramatic effect on the efficacy of the vaccine. This has led expert groups, such as the British Medical Association and the WHO, to oppose the UK’s strategy.

But again, on the basis of our knowledge of how other vaccines work and of the immune system, it would be very surprising if the immune response declined substantially when the gap between the doses is increased. To borrow an example from Rob Wiblin, the trial also didn’t test whether people turn into unicorns two years after receiving the vaccine, but that doesn’t mean we should be agnostic about that possibility. Subsequent evidence has confirmed that the immune response doesn’t drop off after the longer delay.

Scepticism about whether the AstraZeneca vaccine works on the over-65s

Almost half of all EU countries have forbidden the use of the Oxford/AstraZeneca vaccine on the over-65s. This was in part because the sample of over-65s in the initial study was too small to support a conclusive judgement about efficacy for that age group. But there was evidence from the study that the AZ vaccine is highly effective for under-65s. Given what we know about similarities across the human immune system, it is very unlikely that the AZ vaccine has 80% efficacy for under-65s but that its efficacy drops off precipitously for over-65s. To make useful judgements about empirical research, one has to make some judgements about external validity, which need to be informed by non-study considerations. The myopic empiricist approach often neglects this fact.
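To see the Bayesian logic: an inconclusive result from an underpowered subsample is roughly equally likely whether or not the vaccine works for that group, so the likelihood ratio is close to one and the update is tiny. Here is a minimal sketch in Python; all the numbers are hypothetical, chosen to illustrate the logic rather than estimated from any study.

```python
# Hypothetical sketch: how little an underpowered subsample should move
# a strong prior. If an inconclusive result is about as likely whether
# or not the vaccine works for over-65s, the likelihood ratio is ~1 and
# the posterior barely moves.

prior = 0.95  # P(works in over-65s), from immunology + under-65 data (hypothetical)
p_inconclusive_if_works = 0.80      # a small subsample is likely inconclusive anyway
p_inconclusive_if_not_works = 0.95  # ...and also likely inconclusive if it doesn't work

posterior = (p_inconclusive_if_works * prior) / (
    p_inconclusive_if_works * prior
    + p_inconclusive_if_not_works * (1 - prior)
)
print(f"posterior = {posterior:.3f}")  # ~0.941: the prior is barely dented
```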

Denying that the minimum wage affects demand for labour

Prior to the 1990s, almost all economists believed that the minimum wage would cause firms to economise on labour, either by causing unemployment or by reducing hours. This was on the basis of Economics 101 theory: if you increase the price of something, demand for it falls. In the 1990s, some prominent observational studies found that the minimum wage did not in fact have these effects, which has led to much more sanguine views about the employment effects of the minimum wage among some economists. See this recent IGM poll of economists on the employment effects of a minimum wage increase in the US - many respondents appeal to the empirical evidence on the minimum wage when justifying their conclusions.

“An increase to $15/hour is a big jump, and I'm not sure we have the data to know what the effect on employment would be.”

“Evidence is that small increases in min. wage (starting from US lows) don't have large disemployment effects. Don't know what $15 will do”

“The weight of the evidence does not support large job loss. But I'm above extra nervous about setting min $15/hr during the pandemic.”

“Research has shown modest min. wage increases do not increase unemployment. But going from $6 to $15 in the current situation is not modest.”

“Evidence on employment effects of minimum wages is inconclusive, and the employment losses may well be small.”

A lone economist - Carl Shapiro - digs his heels in and sticks to theory:

“Demand for labor is presumably downward sloping, but the question does not ask anything about magnitudes.”

As Bryan Caplan argues, there are several problems with the myopic empiricist approach: 

  1. There are strong grounds from theory to think that any randomly selected demand curve will slope downward. The only way minimum wages wouldn’t cause firms to economise on labour is if employers were monopsonistic buyers of labour, which just doesn’t seem to be the case.
  2. The observational evidence is very mixed and is almost always testing very small treatment effects - increases in the minimum wage of a few dollars. Given the quality of observational studies, we should expect a high number of false negatives if enough studies are conducted (the simulation sketched after this list illustrates how easily such false negatives arise).
  3. The literature on the impact of immigration on the wages of native workers suggests that the demand curve for labour is highly elastic, which is strongly inconsistent with the view that minimum wages don’t damage employment. 
  4. Most economists agree that many European countries have high unemployment due to regulations that increase the cost of hiring workers - minimum wages are one way to increase the costs of hiring workers. 
  5. Keynesians think that unemployment is sometimes caused by nominal downward wage rigidity, i.e. that nominal wages fail to fall until the market clears. This view is very hard to reconcile with the view that the minimum wage doesn’t cause firms to economise on labour. 
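To make point 2 concrete, here is a small simulation of how often noisy studies of a small true effect come back null. Everything here is hypothetical - the effect size and noise level are illustrative, not calibrated to the minimum wage literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small true disemployment effect, measured by
# noisy studies whose estimates scatter around the truth.
true_effect = -0.5   # small negative effect, in arbitrary units
noise_sd = 1.0       # standard error of each study's estimate
n_studies = 10_000

estimates = rng.normal(true_effect, noise_sd, n_studies)

# Crude significance test: a study "finds an effect" only if its
# estimate falls below -1.96 standard errors.
significant = estimates < -1.96 * noise_sd

print(f"Studies detecting the (real) effect: {significant.mean():.0%}")
# With these numbers, only ~7% of studies detect the effect;
# the remaining ~93% report false negatives.
```

Under these assumptions, a literature full of null results is exactly what a small real effect would produce.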

Scepticism about the effects of mild drinking while pregnant

There are a lot of observational studies showing pretty conclusively that moderate or heavy drinking while pregnant is extremely bad for babies: doctors will try to get a pregnant patient who is addicted to both alcohol and heroin off alcohol before getting her off heroin. However, observational studies have struggled to find an effect of mild drinking on birth outcomes, which has led some experts to argue that mild drinking in pregnancy is in fact safe.4

Given what we know about the effects of moderate drinking while pregnant, and theoretical knowledge about the mechanism by which it has this effect, we should have a very strong presumption that mild drinking is also mildly bad for babies. The reason observational studies struggle to find an effect is that they are searching for an effect that is too close to zero to distinguish the signal from the noise, and there are a million potential confounders. The effect is still very likely there.
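A rough sense of the numbers: a standard rule of thumb for sample sizes shows how large a study would need to be to reliably detect an effect near zero. The effect size below is purely illustrative, not an estimate for mild drinking specifically.

```python
# Lehr's rule of thumb: for 80% power at a two-sided 5% significance
# level, you need roughly n = 16 / d^2 participants per group, where d
# is the standardised effect size.
d = 0.05  # illustrative: a very small harm from mild drinking
n_per_group = 16 / d**2
print(f"~{n_per_group:,.0f} participants per group")  # ~6,400
```

And that is before confounders: with observational data rather than a randomised design, even a sample that large may fail to recover such a small effect.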

Nutritional epidemiology

Nutritional epidemiology tries to tease out the effect of different foods on health. It has produced some extraordinary claims. John Ioannidis describes the effects of different foods found in meta-analyses:

“the emerging picture of nutritional epidemiology is difficult to reconcile with good scientific principles. The field needs radical reform… Assuming the meta-analyzed evidence from cohort studies represents life span–long causal associations, for a baseline life expectancy of 80 years, eating 12 hazelnuts daily (1 oz) would prolong life by 12 years (ie, 1 year per hazelnut), drinking 3 cups of coffee daily would achieve a similar gain of 12 extra years, and eating a single mandarin orange daily (80 g) would add 5 years of life. Conversely, consuming 1 egg daily would reduce life expectancy by 6 years, and eating 2 slices of bacon (30g) daily would shorten life by a decade, an effect worse than smoking.”

As Ioannidis notes: “These implausible estimates of benefits or risks associated with diet probably reflect almost exclusively the magnitude of the cumulative biases in this type of research, with extensive residual confounding and selective reporting”. Sticking to an enlightened common sense prior on nutrition is probably a better bet. 

2. Implications for deference

I have outlined some cases above where hewing to the views of certain experts seems likely to lead one to mistaken beliefs. In these cases, taking account of theory, common sense and evidence from other domains leads one to a different view on crucial public policy questions. This suggests that, for Bayesians, a good strategy would be to defer to the experts on what the published scientific evidence says, and let this be one input into one’s all-things-considered judgement about a topic. 

For example, we might accept that some studies find limited effects of masks but also discard that evidence given our other knowledge. 

Many subject matter experts are not experts on epistemology - on whether Bayesianism is true. So, this approach does not obviously violate epistemic modesty.

 

Endnotes

1.  For an overview of the changing guidance, see this Unherd article by Stuart Ritchie.

2.  For an overview see Greenhalgh. For example, “A preprint of a systematic review published on 6 April 2020 examined whether wearing a face mask or other barrier (goggles, shield, veil) prevents transmission of respiratory illness such as coronavirus, rhinovirus, tuberculosis, or influenza.11 It identified 31 eligible studies, including 12 randomised controlled trials. The authors found that overall, mask wearing both in general and by infected members within households seemed to produce small but statistically non-significant reductions in infection rates. The authors concluded that “The evidence is not sufficiently strong to support the widespread use of facemasks as a protective measure against covid-19”11 and recommended further high quality randomised controlled trials.” Trisha Greenhalgh et al., ‘Face Masks for the Public during the Covid-19 Crisis’, BMJ 369 (9 April 2020): m1435, https://doi.org/10.1136/bmj.m1435.

3.  For an overview of the sceptical literature, see this website.

4.  For example, Emily Oster, an economist and author of the popular book Expecting Better, argues that there is “little evidence” that one to two drinks per week causes harm to the foetus, as discussed in this Vox piece.

Comments
AGB

A quibble on the masks point because it annoys me every time it's brought up. As you say, it's pretty easy to work out that masks stop an infected person from projecting nearly as many droplets into the air when they sneeze, cough, or speak, study or no study. But virtually every public health recommendation that was rounded off as 'masks don't work' did in fact recommend that infected people should wear masks. For example, the WHO advice that the Unherd article links to says:

Among the general public, persons with respiratory symptoms or those caring for COVID-19 patients at home should receive medical masks

Similarly, here's the actual full statement from Whitty in the UK:

Prof Whitty said: “In terms of wearing a mask, our advice is clear: that wearing a mask if you don’t have an infection reduces the risk almost not at all. So we do not advise that.”

“The only people we do sometimes use masks for are people who have got an infection and that is to help them to stop it spreading around," he added.

As for the US, here's Scott Alexander's summary of the debate in March:

As far as I can tell, both sides agree on some points.

They agree that N95 respirators, when properly used by trained professionals, help prevent the wearer from getting infected.

They agree that surgical masks help prevent sick people from infecting others. Since many sick people don’t know they are sick, in an ideal world with unlimited mask supplies everyone would wear surgical masks just to prevent themselves from spreading disease.

So 'the experts' did acknowledge, often quite explicitly, that masks should stop infected people spreading the infection, as the video and just plain common sense would suggest. 

This is mostly a quibble because I think it's pretty plausible you know this, and I do agree that the downstream messaging was pretty bad: it was mostly rounded off to 'masks don't work', and there was a strong tendency to double down on that as it became contentious, as opposed to (takes deep breath) 'Masks are almost certainly worthwhile for infected people, and we don't really know much of anything for asymptomatic people, but supplies are limited, so maybe they aren't the priority right now but could be worth it very soon.' Admittedly the first is much more pithy.

But it's not entirely frivolous either; I've had many conversations with people (on both sides) over the past several months who appeared to be genuinely unaware that the WHO et al. were recommending that infected people wear masks; they just blindly assumed that the media messaging matched what the experts were saying at the time. So I suggest that regardless of one's thoughts on myopic empiricism, capabilities of experts, etc., one easy improvement when trying to defer to experts is to go and read what the experts are actually saying, rather than expecting a click-chasing headline writer to have accurately summarised it for you.

I realize that this is kind of a tangent to your tangent, but I don't think the general conjunction of (Western) expert views in 2020 was particularly defensible. Roughly speaking, the views (which I still sometimes hear parroted by Twitter folks) were something like

  1. For most respiratory epidemics, (surgical) masks are effective at protecting wearers in medical settings.
  2. They are also effective as a form of source control in medical settings.
  3. They should be effective as a form of source control in community transmission.
  4. However, there is insufficient evidence to determine whether they are useful to protect wearers in community transmission.

I think each of those beliefs may [1] be reasonable by itself in the abstract, but the conjunction together is extremely suspicious. The policy prescriptions are likewise suspicious.

Thus, I think Halstead's evidence in that section can be modified fairly trivially to still preserve the core of that argument.

[1] Personally, my view on this is that if masks were a newfangled technology, the empirical beliefs (though not necessarily the logic that led to holding them together) may be forgivable coming from our experts. But 109+ years is also a long time to get something this important wrong. FWIW, I didn't have a strong opinion on masks for community transmission in 2019, so it's not like I got this particularly early. But I like to imagine that if any of the commentators here were to be an expert actively studying this, it would have taken most of them less than a century to figure this out. 

AGB

I mostly agree with this. Of course, to notice that, you have to know (2)/(3) are part of the ‘expert belief set’, or at least it really helps, which you easily might not have done if you relied on Twitter/Facebook/headlines for your sense of ‘expert views’.

And indeed, I had conversations where pointing those things out to people updated them a fair amount towards thinking that masks were worth wearing.

In other words, even if you go and read the expert view directly and decide it doesn’t make sense, I expect you to end up in a better epistemic position than you would otherwise be; it’s useful for both deference and anti-deference, and imo will strongly tend to push you in the ‘right’ direction for the matter at hand.

Edit: Somewhat independently, I’d generally like our standards to be higher than ‘this argument/evidence could be modified to preserve the conclusion’. I suspect you don’t disagree, but stating it explicitly because leaning too hard on that in a lot of different areas is one of the larger factors leading me to be unhappy with the current state of EA discourse.

Upon reflection, I want to emphasize that I strongly agree with your general point that in the world we live in, on the margin people probably ought to listen directly to what experts say. Unfortunately, I think this is in the general category of other advice like "do the homework" (eg, read original sources, don't be sloppy with the statistics, read original papers, don't just read the abstract or press release, read the original 2-sentence quote before taking somebody else's 1-sentence summary at face value, etc), and time/attention/laziness constraints may make taking this advice to heart prohibitively costly (or be perceived this way). 

I certainly think it's unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I'm a bit sad that our best solution here appears to be blaming user error.

Somewhat independently, I’d generally like our standards to be higher than ‘this argument/evidence could be modified to preserve the conclusion’

I strongly agree, though I usually feel much more strongly about this for evidence than for arguments! :P 

 

I certainly think it's unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I'm a bit sad that our best solution here appears to be blaming user error.

Yeah, I think this seems true and important to me too. 

There are three, somewhat overlapping solutions to small parts of this problem that I'm excited about: (1) "Research Distillation" to pay off "Research Debt", (2) more summaries, and (3) more collections.

And I think we can also broaden the idea of "research distillation" to distilling bodies of knowledge other than just "research", like sets of reasonable-seeming arguments and considerations various people have highlighted.

I think the new EA Forum wiki+tagging system is a nice example of these three types of solutions, which is part of why I'm spending some time helping with it lately.

And I think "argument mapping" type things might also be a valuable, somewhat similar solution to part of the problem. (E.g., Kialo, though I've never actually used that myself.)

There was also a relevant EAG panel discussion a few years ago: Aggregating knowledge | Panel | EA Global: San Francisco 2016.

I have a lot of sympathy for comments in the general vein of "some people have unusually high accuracy relative to other high status people, this accuracy is ex ante predictable, and this accuracy has nontrivial and positive correlation with (without loss of generality) our friends."

However, I'm pretty confused by what you mean by "Bayesian." Presumably you mean something generally stronger (and  also different) than "knows the existence of Bayes theorem" and weaker than "agrees on all epistemological points with Internet rationalists."[1] 

But I think it's easy to artificially redraw boundaries such that good results/thinking that you like is defined ex post as "Bayesian", and bad results/thinking that you dislike is defined as "non-Bayesian."

[1] Another way you might mean "Bayesian" might be something like "explicitly makes Bayesian updates for all questions." However I suspect most people we trust (eg the majority of superforecasters) do not do this. 

[anonymous]

Hi,  thanks for this. 

I'm not making a claim that rationalists are more accurate than the standard experts. I actually don't think that is true - e.g. rationalists think you obviously should one-box in Newcomb's problem (which I think is wrong, as do most decision theorists). The comments of Greg Lewis' post discuss the track record of the rationalists, and I largely agree with the pessimistic view there. I also largely agree with the direction and spirit of Greg's main post.

My post is about what someone who accepts  the tenets of Bayesianism would do given the beliefs of experts. In the examples I mention, some experts have gone wrong by not taking account of their prior when forming beliefs (though there are other ways to fall short of the Bayesian standard, such as not updating properly given a prior). I think this flaw has been extremely socially damaging during the pandemic.

 I don't think this implies anything about deferring to the views of actual rationalists, which would require a sober assessment of their track record. 

I also largely agree with the direction and spirit of Greg's main post.

Personally, I broadly agreed with the spirit of the post before 2020. I'm somewhat more reticent now. But this is maybe a distraction so let's set it aside for now. 

At a high level, I think I agree with the core of your argument. However, some of the subcomponents/implications seem "slippery" to me. In particular, I think readers (or at least, one particularly thick-skulled reader that I irrationally have undue concern about) may read into it connotations that are potentially quite confusing/misleading.

I'll first try to restate the denotations of your post in my own words, so you can tell me where (if) I already went wrong.

  1. Assume that someone espouses the tenets of philosophical Bayesianism
  2. Many other people in the world either do not espouse Bayesianism, or do not follow it deeply, or both.
  3. A subset of the group above includes some (most) people whom we in 2021 secular Western culture consider to be domain experts.
  4. With equal evidence, an epistemic process that does not follow ideal Bayesian reasoning will be less truth-tracking than ideal Bayesian reasoning (assuming, again, that philosophical Bayesianism is correct).
  5. Epistemic processes that are not explicitly Bayesian are on average worse at approximating ideal Bayesian reasoning than epistemic processes that are explicitly Bayesian.
  6. From 3 and 4/5, we can gather that non-Bayesian domain experts will on average fall short of both the Bayesian ideal and the Bayesian practice (all else being equal).
  7. {Examples where, to a Bayesian reasoner, it appears that people with non-Bayesian methods fall short of Bayesian reasoning}
  8. Thus, Bayesian reasoners should not defer unconditionally to non-Bayesian experts (assuming Bayesian reasoning is correct).
  9. Non-Bayesian experts are systematically flawed in ways that are ex ante predictable. (In statistical jargon, this is bias, not just high variance/noise).
  10. One of the ways in which Non-Bayesian experts are systematically biased is in an over-reliance on naive scientism/myopic empiricism.
  11. Thus, Bayesian reasoners who adjust for this bias when deferring to non-Bayesian experts will come to more accurate conclusions than Bayesian reasoners who do not.

Assuming I read your post correctly, I agree with both of the above bolded claims (which I assume to be the central points of your article). I also think I agree with the (importantly non-trivial!) examples you give, barring caveats like AGB's. However, I think many readers (or again, perhaps just one) reading this may take implied connotations that perhaps you did not intend, some of which are wrong.

  1. People who call themselves "Bayesians" are on average systematically better at reasoning than people who do not, all else being equal.
  2. People who call themselves "Bayesians" are on average systematically better at reasoning than people who do not.
    1. (I think we already established upthread that you do not believe this).
  3. Naive scientism is the most important bias facing non-Bayesian scientists today.
  4. Naive scientism is the most important bias to adjust for when interacting or learning from non-Bayesian scientists.
  5. When trying to interpret claims from scientists, the primary thing that a self-professed Bayesian should watch out for is naive scientism.
  6. Scientists will come to much more accurate beliefs if they switched to Bayesian methods.
  7. Scientists will come to much more accurate beliefs if they switched to philosophical Bayesianism.
    1. (Perhaps you meant this forum post as a conditional, so we can adjust the above point/points with the caveat "assuming philosophical Bayesianism is true/useful")

At the risk of fighting a strawman, when I think about the problems in reasoning/epistemics, either in general, or during the pandemic, I do not think naive scientism is the most obvious, or most egregious, mistake. 

  • The epistemic failure modes involved with deferring to the wrong experts is itself not a mistake of naive scientism
    • Somebody who thinks "I should defer to the US CDC on masks (or Sweden on lockdowns, etc) but not China, Japan, or South Korea" is not committing this error because they had good global priors on deference but chose to ignore their priors in favor of an RCT saying that US residents who deferred to the US gov't had systematically better outcomes than US residents who deferred to other health authorities.
      • If anything, most likely this question did not come to mind at all.
  • I think most mistakes of deference are due to practical issues of the deferrer or situation rather than because the experts were wrong.
  • To the extent experts were wrong, I'm pretty suss of stories that this is primarily due to naive scientism (though I agree that this is a nontrivial and large bias):
    • Many (most?) early pandemic expert surveys underperformed both simple linear extrapolation and simple SEIR models for covid.
    • Expert Political Judgment also demonstrates political science experts robustly underperforming simple algorithms, in an environment where more straightforward studies cannot be conducted
    • Clinicians often advocate for "real world evidence" over RCTs. But I don't think clinical intuitions robustly outperform RCT evidence in predicting the results of replications
      • (though it's perhaps telling that I don't have a citation on hand on this)
  • To the extent scientific experts are wrong in ex ante predictable ways, here's a short candidate list of ways that I think are potentially more important than naive scientism (note that of course a lot of this is field, topic, individual, timing, and otherwise context dependent):
    • motivated reasoning
    • publication bias
    • garden-of-forking paths
      • edit 2021/07/13: I misunderstood what "garden of forking paths" is when I made the comment originally. Since then I've read the paper and, by a happy coincidence, it turns out that "garden of forking paths" is still a large enough problem to belong here. But it's sort of a Gettier problem.
    • actually being bad at statistics
    • blatant lies

Anyway, I realize this comment is way too long for something that's effectively saying "80% in agreement." I just wanted a place to write down my thoughts. Perhaps it can be helpful to at least one other person.

Philosopher Michael Strevens argues that what you're calling myopic empiricism is what has made science so effective: https://aeon.co/essays/an-irrational-constraint-is-the-motivating-force-in-modern-science

I'd like to see more deference to the evidence as you say, which isn't the same as to the experts themselves, but more deference to either theory or common sense is an invitation for motivated reasoning and for most people in most contexts would, I suspect, be a step backward.

But ultimately I'd like to see this question solved by some myopic empiricism! We know experts are not great forecasters; we know they're pretty good at knowing which studies in their fields will or won't replicate. More experiments like that are what will tell us how much to defer to them.

[anonymous]

Thanks for sharing that piece, it's a great counterpoint. I have a few thoughts in response. 

Strevens argues that myopic empiricism drives people to do useful experiments which they perhaps might not have done if they stuck to theory. This seems to have been true in the case of physics. However, there are also a mountain of cases of wasted research effort, some of them discussed in my post. The value of information from e.g. most studies on the minimum wage and observational nutritional epidemiology is minuscule in my opinion. Indeed, it's plausible that the majority of social science research is wasted money, per the claims of the meta-science movement.

I agree that it's not totally clear whether it would be positive if in general people tried to put more weight on theory and common sense. But some reliance on theory and common sense is just unavoidable. So, this is a question of how much reliance we put on that, not whether to do it at all. For example, to make judgements about whether we should act on the evidence on whether masks work, we need to make judgements about the external validity of studies, which necessarily involves making some theoretical judgements about the mechanism by which masks work, which the empirical studies confirm. The true logical extension of myopic empiricism is the inability to infer anything from any study: "We showed that one set of masks worked in a series of studies in the US in 2020, but we don't have a study of whether this other set of masks works in Manchester in 2021, so we don't know whether they work".

I tend to think it would be positive if scientists gave up on myopic empiricism and shifted to being more explicitly Bayesian. 

Thanks for this. I don't agree for scientists, at least in their published work, but I do agree that to an extent it's of course inevitable to bring in various other forms of reasoning to make subjective assessments that allow for inferences. So I think we're mostly arguing over extent.

My argument would basically be:

  1. Science made great progress when it agreed to focus its argument on empirical evidence and explanations of that evidence.
  2. Economics has (in my opinion) made great progress in moving from a focus on pure deduction and theory (akin to the Natural Philosophers pre-science) to a focus on generating careful empirical evidence (especially about cause and effect). Theory's job is then to explain that evidence. (Supply and demand models don't do a good job of explaining the minimum wage evidence that's been developed over more than a decade.)
  3. In forecasting, starting with base rates (a sort of naive form of empiricism) is best practice. Likewise, naive empiricism seems to work in business, sports, etc. Even descriptive and correlational data appears practically very useful.
  4. Therefore, I'd like to see EA and the rationality community stay rooted in empiricism as much as possible. That's not always an option of course, but empirically driven processes seem to beat pure deduction much of the time, even when the data available doesn't meet every standard of statistical inference. This still leaves plenty of room for the skilled Bayesian to weight things well, vet the evidence, depart from it when warranted etc.

I've not mentioned experts even once. I find the question of when and how much to defer to be quite difficult and I have no strong reaction to what I think is your view on that. My concern is with your justification of it. Anything that moves EA/rationality away from empiricism worries me. A bunch of smart people debating and deducing, largely cut off from observation and experiment, is a recipe for missing the mark. I know that's not truly what you're suggesting but that's where I'm coming from.

(Finally, I grant that the scale of the replication crisis, or put another way the social science field in question, matters a lot and I've not addressed it.)

[anonymous]

2. I would disagree on economics. I view the turn of economics towards high causal identification and complete neglect of theory as a major error, for reasons I touch on here. The discipline has moved from investigating important things to trivial things with high causal identification. The trend towards empirical behavioural economics is also in my view a fad with almost no practical usefulness. (To reiterate my point on the minimum wage - the negative findings are almost certainly false: they are what you would expect to find for a small treatment effect and noisy data in observational studies. Even if I believed that the effect of a minimum wage increase of $3 is small but negative, before reading the literature I would still expect to find a lot of studies finding no effect, because empirical research is not very good; so one should not update much on those negative findings. If you think the demand curve for low-skilled labour is vertical, then the phenomenon of ~0 effect on native US wages after a massive influx of low-skilled labour from Cuba is inexplicable. And: the literature is very mixed - it's not like all the studies find no effect; that is a misconception.)

3. I agree that focusing on base rates is important, but that doesn't seem to get at the myopic empiricism issue. For example, the base rate of vaccine efficacy dropping off a cliff after 22 days is very low, but that was not established in the initial AstraZeneca study. To form that judgement, one needs evidence from other domains, which myopic empiricists ignore.

4. I'm not sure where we disagree there. I don't think EAs should stay rooted in empiricism if that means 'form judgements only on the basis of the median published scientific study', which is the view I criticise. I'm not saying we should become less empirical - I think we should take account of theory but also empirical evidence from other domains, which, as I discuss, many other experts refuse to do in some cases.

I'm not saying that we should be largely cut off from observation and experiment and should just deduce from theory. I'm saying that the myopic empiricist approach is not the right one. 

Makes sense on 3 and 4. Out of curiosity, what would change your mind on the minimum wage? If you don't find empirical economics valuable, nor the views of experts (or at least don't put much stock in them), how would you decide whether supply and demand was a better or worse theory than an alternative? The premises underlying traditional economic models are clearly not fully 100% always-and-everywhere true, so their conclusions need not be either. How do you decide about a theory's accuracy or usefulness if not by reference to evidence or expertise?

[anonymous]

I think I would find it very hard to update away from the view that the minimum wage reduces demand for labour. Maybe if there were an extremely well done RCT showing no effect from a large minimum wage increase of $10, I would update. Incidentally, here is discussion of an RCT on the minimum wage which illustrates where the observational studies might be going wrong. The RCT shows that employers reduced hours worked, which wouldn't show up in observational studies that mainly look for disemployment effects.

I am very conscious of the fact that almost everyone I have ever tried to convince of this view on the minimum wage remains wholly unmoved. I should make it clear that I am in favour of redistribution through tax credits, subsidies for childcare and that kind of thing. I think the minimum wage is not a smart way to help lower income people. 

Maybe, to see if I understand, I should try to answer: it'd be a mix of judgment, empirical evidence (but much broader than the causal identification papers), deductive arguments that seem to have independent force, and maybe some deference to an interdisciplinary group with good judgment, not necessarily academics?

I buy two of your examples: in the case of masks, it seems clear now that the experts were wrong before, and in "First doses first", you present some new evidence that the priors were right.

On nutrition and lockdowns, you haven't convinced me that the point of view you're defending isn't the one that deference would arrive at anyway: it seems to me like the expert consensus is that lockdowns work and most nutritional fads are ignorable.

On minimum wage and alcohol during pregnancy, you've presented a conflict between evidence and priors, but I don't feel like you resolved the conflict: someone who believed the evidence proved the priors wrong won't find anything in your examples to change their minds. For drinking during pregnancy, I'm not even really convinced there is a conflict: I suspect the heart of the matter is what people mean by "safe", what risks or harms are small enough to be ignored.

I think in general there are for sure some cases where priors should be given more weight than they're currently afforded. But it also seems like there are often cases where intuitions are bad, where "it's more complicated than that" tends to dominate, where there are always more considerations or open uncertainties than one can adequately navigate on priors alone. I don't think this post helps me understand how to distinguish between those cases.

[anonymous]

Hello, my argument was that there are certain groups of experts you can ignore or put less weight on because they have the wrong epistemology. I agree that the median expert might have got some of these cases right. (I'm not sure that's true in the case of nutrition, however.)

The point in all these cases re priors is that one should have a very strong prior, which will not be shifted much by flawed empirical research. One should have a strong prior that the efficacy of the vaccine won't drop off massively for the over-65s even before this is studied.

One can see the priors vs evidence case for the minimum wage more formally using Bayes' theorem. Suppose my prior that minimum wages reduce demand for labour is 98%, which is reasonable. I then learn that one observational study has found that they have no effect on demand for labour. Given the flaws in empirical research, let's say there is a 30% chance of a study finding no effect conditional on there being an effect. We might put a symmetrical probability on a study finding no effect conditional on there being no effect - a 70% chance of a null result if minimum wages in fact have no effect.

Then my posterior is (.3 × .98)/(.3 × .98 + .7 × .02) ≈ 95.5%.

So I am still very sure that minimum wages have an effect even if there is one study showing the contrary. FWIW, my reading of the evidence is that most studies do find an effect on demand for labour, so after assimilating it all, one would probably end up where one's prior was. This is why the value of information of research into the minimum wage is so low.
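Here is a minimal sketch of this update in Python, using the illustrative numbers above (the 98% prior and the 30%/70% likelihoods are stipulated for the example, not estimated from data):

```python
# Bayesian update for H = "minimum wages reduce demand for labour"
# after observing E = one study finding no effect.

prior = 0.98               # P(H), the strong theory-driven prior
p_null_given_h = 0.30      # P(E | H): a flawed study misses a real effect
p_null_given_not_h = 0.70  # P(E | not H): a study finds no effect when there is none

posterior = (p_null_given_h * prior) / (
    p_null_given_h * prior + p_null_given_not_h * (1 - prior)
)
print(f"P(H | E) = {posterior:.3f}")  # ~0.955
```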

On drinking in pregnancy, I don't think this is driven by people's view of acceptable risk, but rather by a myopic empiricist view of the world. Oster's book is the go-to for data-driven parents, and she claims that small amounts of alcohol have no effect, not that they have a small effect but are worth the risk. (Incidentally, the latter claim is also clearly false - it obviously isn't worth the risk.)

On your final point, I don't think one can or should aim to give an account of whether relying on theory or common sense is always the right thing to do. I have highlighted some examples where failure to rely on theory and evidence from other domains leads people astray. Epistemology is complicated, and this insight may of course not be true in all domains. For a comprehensive account of how to approach cases such as these, one cannot say much more than that the true theory of epistemology is Bayesianism, and that to apply it properly you need to be apprised of all of the relevant information in different fields.

Bluntly I think a prior of 98% is extremely unreasonable. I think that someone who had thoroughly studied the theory, all credible counterarguments against it, had long discussions about it with experts who disagreed, etc. could reasonably come to a belief that strong. An amateur who has undertaken a simplistic study of the basic elements of the situation can't IMO reasonably conclude that all the rest of that thought and debate would have a <2% chance of changing their mind.

Even in an extremely empirically grounded and verifiable theory like physics, for much of the history of the field, the dominant theoretical framework has had significant omissions or blind spots that would occasionally lead to faulty results when applied to areas that were previously unknown. Economic theory is much less reliable. I think you're correct to highlight that economic data can be unreliable too, and it's certainly true that many people overestimate the size of Bayesian updates based on shaky data, and should perhaps stick to their priors more. But let's not kid ourselves about how good our cutting edge of theoretical understanding is in fields like economics and medicine – and let's not kid ourselves that nonspecialist amateurs can reach even that level of accuracy.

[anonymous]

This is maybe getting too bogged down in the object-level. The general point is that if you have a confident prior, you are not going to update on uncertain observational evidence very much. My argument in the main post is that ignoring your prior entirely is clearly not correct and that is driving a lot of the mistaken opinions I outline.

Tangentially, I stand by my position on the object level - I actually think that 98% is too low! For any randomly selected good I can think of, I would expect a price floor to reduce demand for it in >99% of cases. Common sense aside... The only theoretical reason this might not be true is if the market for labour is monopsonistic. That is just obviously not the case. There is also evidence from the immigration literature which suggests that native wages are barely affected by a massive influx of low-skilled labour, which implies a near-horizontal demand curve. There is also the point that if you are slightly Keynesian, you think that involuntary unemployment is caused by the failure of wages to adjust downward; legally forbidding them from doing this must cause unemployment.

What makes you believe the market for labor isn't monopsonistic?

To me it seems pretty plausible that the labor market is full of minor monopsonies. For example, I prefer to work at a store closer rather than farther away from me, which would give the local store some market power over my labor.

Or maybe I prefer to work at the only coffee place in my town as opposed to the only tea place, due to my interest in coffee.

I've strong upvoted Ben's points, and would add a couple of concerns:
  • I don't know how in any particular situation one would usefully separate the object-level from the general principle. What heuristic would I follow to judge how far to defer to experts on banana growers in Honduras on the subject of banana-related politics?
  • The less pure a science gets (using https://xkcd.com/435/ as a guide), the less we should be inclined to trust its authorities, but the less we should also be inclined to trust our own judgement - the relevant factors grow at a huge rate

So sticking to the object level and the eg of minimum wage, I would not update on a study that much, but strong agree with Ben that 98% is far too confident, since when you say 'the only theoretical reason', you presumably mean 'as determined by other social science theory'.

(In this particular case,  it seems like you're conflating the (simple and intuitive to me as well fwiw) individual effect of having to pay a higher wage reducing the desirability of hiring someone with the much more complex and much less intuitive claim that higher wages in general would reduce number of jobs in general - which is the sort of distinction that an expert in the field seems more likely to be able to draw.)

So my instinct is that Bayesians should only strongly disagree with experts in particular cases where they can link their disagreement to particular claims the experts have made that seem demonstrably wrong on Bayesian lights.

[anonymous]

As I mention in the post, it's not just theory and common sense, but also evidence from other domains. If the demand curve for low-skilled labour is vertical, then it is all but impossible that a massive influx of Cuban workers during the Mariel boatlift had close to zero effect on native US wages. Nevertheless, that is what the evidence suggests.

I am happy to be told of other theoretical explanations of why minimum wages don't reduce demand for labour. The ones I am aware of in the literature are that employers are monopsonistic buyers of labour (clearly not the case), or that firms are not systematically profit-seeking (which also doesn't seem true).

The claims that are wrong are the ones I highlight in the post, viz. that the empirical evidence is all that matters when forming beliefs about the minimum wage. Most empirical research isn't that good and cannot distinguish signal from noise when there are small treatment effects - e.g. the Card and Krueger research that started the whole debate off got its data by telephoning fast food restaurants.

I think economics is especially prone to this kind of "it's more complicated than that" issue. The idea that firms will reduce employment in response to a higher minimum wage places a great deal of faith in the efficiency of markets. There are plenty of ways that an increased cost of labor wouldn't lead to lower demand: there might be resistance to firing employees because of social pressures; institutions may simply remain stuck in thinking that a certain number of people are necessary to do all the work that needs to be done, and be resistant to changing those attitudes. Consider the phenomenon of "bullshit jobs". If you've ever worked in an office, in the public or private sector, you've probably noticed that a huge number of employees seem to do little of substance. Even as someone who worked a minimum wage job in a department store, many of my co-workers seemed to do little if anything of use, and yet no effort was made to ensure that everyone was being productive.

If anything, I would argue that the idea that markets are, by default, perfectly efficient (or close to perfectly efficient) goes against the lived experience of me and the people I know, and my prior is to disbelieve arguments predicated on it, unless there is some specific evidence or particularly good reason to think that the market would be very efficient in a specific case (such as securities trading, where there are a huge number of smart, well-qualified people working very hard to exploit any inefficiencies to make absurd amounts of money).

Great post! The main reason academics suffer from "myopic empiricism" is that they're optimising for legibility (an information source is "legible" if it can be easily trusted by others), both in their information intake and output. Or, more realistically, they're optimising for publishing esteemable papers, and since they can't reference non-legible sources of evidence, they'll be less interested in attending to them. One way to think about it is that "myopic academics" are trapped in an information bubble that repels non-legible information.

And I think this is really important. We need a source of highly legible data, and academic journals provide exactly that (uh, in theory). It only starts being a big problem once those papers start offering conclusions about the real world while refusing to leave their legibility bubble. And that sums up all the failures you've listed in the article.

The moral of the story is this: scientists really should optimise for legibility in their data production, and this is a good thing, but if they're going to offer real-world advice, they better be able to step out of their legibility bubble.

Or, more realistically, they're optimising for publishing esteemable papers, and since they can't reference non-legible sources of evidence, they'll be less interested in attending to them.

I think this is broadly right.

The main reason academics suffer from "myopic empiricism" is that they're optimising for legibility (an information source is "legible" if it can be easily trusted by others), both in their information intake and output.

I don't think this is quite right.

It seems pretty unclear to me whether the approach academics are taking is actually more legible than the approach Halstead recommends. 

And this is whether we use "legible" to mean: 

  1. "how easily can others understand why they should trust this (even if they lack context on the speaker, lack a shared worldview, etc.)", or
  2. "how easily can others simply understand how the speaker arrived at the conclusions they've arrived at"
    1. The second sense is similar to Luke Muehlhauser's concept of "reasoning transparency"; I think that that post is great, and I'd like it if more people followed its advice.

For example, academics often base their conclusions mostly on statistical methods that almost no laypeople, policymakers, etc. would understand; often even use datasets they haven't made public; and sometimes don't report key parts of their methods/analysis (e.g., what questions were used in a survey, how they coded the results, whether they tried other statistical techniques first). Sometimes the main way people will understand how they arrived at their conclusions and why to trust them is "they're academics, so they must know what they're doing" - but then we have the replication crisis etc., so that by itself doesn't seem sufficient.

(To be clear, I'm not exactly anti-academia. I published a paper myself, and think academia does produce a lot of value.)

Meanwhile, the sort of reasoning Halstead gives in this post is mostly very easy to understand and assess the reasonableness of. This even applies to potentially assessing Halstead's reasoning as not very good - some commenters disagreed with parts of the reasoning, and it was relatively easy for them to figure out and explain where they disagreed, as the points were made in quite "legible" ways.

(Of course, Halstead probably deliberately chose relatively clear-cut cases, so this might not be a fair comparison.)

This comes back to me being a big fan of reasoning transparency.

That said, I'm not necessarily saying that those academics are just not virtuous, or that if I were in their shoes I'd be more virtuous - I understand that the incentives they face push against full reasoning transparency, and that's just an unfortunate situation that's not their fault. Though I do suspect that it'd be good for more academics to (1) increase their reasoning transparency a bit, in ways that don't conflict too much with the incentive structures they face, and to (2) try to advocate for more reasoning transparency by others and for tweaking the incentives. (But this is a quick hot take; I haven't spent a long time thinking about this.)

Thanks for writing this. I liked the examples and I thought this point, while obvious in retrospect, wasn't originally clear in my mind:

Many subject matter experts are not experts on epistemology - on whether Bayesianism is true. So, this approach does not obviously violate epistemic modesty.

I can think of a few ways increasing minimum wages might increase employment at least for small enough increases (for large enough increases, no one can hire you):

  1. Market wages may be below wages that maximize productivity/efficiency, since increasing wages increases productivity. Setting higher wages could therefore in principle allow a company to hire more people. See efficiency wages. I think companies by default will pay low-skilled workers as little as possible, not knowing they can do better by increasing wages, so it wouldn't be too surprising if increasing the minimum wage sometimes increased employment. Walmart has been voluntarily increasing wages, and it might be good policy. Of course, a minimum wage is pretty one-size-fits-all, and it might be too high for some companies and too low for others.
  2. Increasing the minimum wage may reduce burdens to a community from suicide and substance abuse. I would guess that if someone in your family commits suicide or uses drugs, this has a negative impact on your own employment prospects. The effect on suicide rates seemed pretty small, though.

I thought your moderate drinking point was very interesting and connected some dots in my head. It seems plausible that the vast majority of causal relations are mild. If this is the case, the majority of causality could be ‘occurring’ through effects too small to call significant. I guess that could seem pretty obvious, but it isn't something I ever heard talked about in my econometrics class or in my RAing.

Concerning the scepticism about whether the AstraZeneca vaccine works on the over-65s, I think it's useful to keep in mind that the purpose of a clinical trial is not only to test for efficacy, but also to test for safety. Maybe some experts were concerned that older people would have more difficulties dealing with side effects, but chose to silence these possibly legitimate concerns and to only talk openly about efficacy questions. If the world were utilitarian, then I think this would probably not be a very strong point. But as it stands, I think that a handful of deaths caused by a vaccine would cause a major backlash. (And, if you ask me, I would prefer a transparent communication strategy, but I'm not surprised if it turns out that they prefer something else.)
