A very pessimistic view on the state of research quality in the US, particularly in public health research. Some choice quotes:


My experiences at four research universities and as a National Institutes of Health (NIH) research fellow taught me that the relentless pursuit of taxpayer funding has eliminated curiosity, basic competence, and scientific integrity in many fields.

Yet, more importantly, training in “science” is now tantamount to grant-writing and learning how to obtain funding. Organized skepticism, critical thinking, and methodological rigor, if present at all, are afterthoughts.


From 1970 to 2010, as taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent, with most due to misconduct.


The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor.


academic research is often “conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.” In other words, taxpayers fund studies that are conducted for non-scientific reasons such as career advancement


Incompetence in concert with a lack of accountability and political or personal agendas has grave consequences: *The Economist* stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.


Still, the author says there is hope for reform. The last three paragraphs suggest abolishing overheads, limiting the number of grants a PI can hold and capping PIs' maximum age, and preventing the use of public funding for publicity.

Comments

> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.

we can't know if this is a good or bad number without context.

Good point. Unfortunately the Economist article referenced for this number is pay-walled for me and I am not sure if it indicates the total number of clinical trial participants during that time.

Your comment got me interested, so I did some quick googling. In the US in 2009 there were 10,974 registered trials with 2.8 million participants, and in the EU the median number of patients studied for a drug to be approved was 1,708 (during the same time window). I couldn't quickly find the average length of a clinical trial.

I expect 80,000 patients would be at most 1% of the total clinical trial participants during that 10-year window, so this claim might be a bit over-emphasised (although it does seem striking at first read).
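A quick sanity check on that estimate. This is a rough, hypothetical extrapolation that assumes the 2009 US figure of ~2.8 million participants per year held across the whole 2000–2010 window, which the cited sources don't confirm:

```python
# Back-of-envelope check (assumed figures, not data from the Economist article):
# ~2.8 million US trial participants in 2009; assume a similar annual figure
# held for each year of the 2000-2010 window.
annual_participants = 2_800_000   # registered US trial participants, 2009
years = 10
total_participants = annual_participants * years  # ~28 million

retracted_trial_patients = 80_000  # figure quoted from The Economist

share = retracted_trial_patients / total_participants
print(f"{share:.2%}")  # ~0.29%, well under the 1% upper bound guessed above
```

Under these assumptions the 80,000 figure is around 0.3% of all participants, consistent with the "at most 1%" guess.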

I would argue the article is extremely pessimistic.

Yes, funds sometimes get misallocated or are given to people who have committed fraud.

More often, they go to hard-working researchers who really don't make that much at all...people who hate fake or misleading scientific claims more than the average taxpayer.

And yes, there's a replication crisis...that people are aware of and working to address.

In short, I think the author uses an extremely broad brush: "The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor."

And yet, scientific breakthroughs happen all the time and the world is better for it.

In short, maybe the author is burnt out or has only ever worked with poor colleagues? Or hasn't been funded in a while?

Most of the researchers I've met are honest and hard-working and doing their best to get it right, even in the face of challenging questions and strained resources.

I agree that it's an extreme stance and probably overly-general (although the specificity to public health and biomedical research is noted in the article).

Still, my feeling is that this is closer to the truth than we'd want. For instance, from working in three research groups (robotics, neuroscience, basic biology), I've seen that the topic (e.g. to round out somebody's profile) and participants (e.g. re-doing experiments somebody else did so they don't have to be included as an author, instead of just using their results directly) of a paper are often selected mainly on perceived career benefits rather than scientific merit.

This is particularly true when the research is driven by junior researchers rather than established professors, as the value of papers to the former is much more about whether they will help secure grants and a faculty position than about their scientific merit. For example, it's very common for a group of post-docs and PhD students to collaborate on a paper without a professor to 'demonstrate' their independence, but these collaborations often just end up describing an orphan finding or obscure method that will never really be followed up on, and the junior researchers' time could arguably have produced more scientifically meaningful results if they had focused on their main projects. Of course, it's hard to evaluate how such practices influence academic progress in the long run, but they seem inefficient in the short term and stem from the perverse incentive of careerism.

My impression is that questionable research practices probably vary a lot by field, and the fields most susceptible to poor practices are probably those where the value of the findings won't really be known for a long time, like basic biology. My experience in neuroscience and biology is that much more 'spin', speculation, and storytelling goes into presenting biological findings than in robotics (where results are usually clearer steps along a path towards a goal). While a certain amount of storytelling is required to present a research finding convincingly, it has become a bit of a one-up game in biology, where your work really has to be presented as a critical step towards an applied outcome (like curing a disease or inspiring a new type of material) for anybody to take it seriously, even when it's clearly blue-sky research that hasn't yet found an application.

As for the author, it looks like he is no longer working in academia. From his publication record it looks like he was quite productive for a mid-career researcher, and although he may have an axe to grind (presumably he applied for many faculty positions but didn't get any; a common story), being outside the Ivory Tower can provide a lot more perspective on its failings than you get from inside it.

I wouldn't say that there are no inefficiencies in academia. There are inefficiencies in every line of work.

I would say that on the whole, a lot of great work still gets done.

I definitely wouldn't say that academia is rife with "incompetence in concert with a lack of accountability."

Sure, there are people with PhDs who are not strong researchers. There are a lot of them who are, though.

We may just disagree on the ratio of the two groups based on our own experiences.

> In short, maybe the author is burnt out or has only ever worked with poor colleagues? Or hasn't been funded in a while?

I downvoted this comment based on this paragraph. Arch speculations that a position taken is probably due to inadequacies and personal frustrations of the author are nearly always uncharitable, unwarranted and, in my experience, well-correlated with sloppy and defensive thinking.

No, the guy probably isn't just mad because he couldn't cut it in academia.

I agree with this comment and retracted my upvote for the same reason, though I thought the rest of Tom's comment was quite reasonable (see Alexey Guzey for some examples of quiet scientific progress).

Thanks for your feedback.

I was trying to figure out why the author would be so, so critical of scientific research.

I would say he was downright uncharitable, in fact.

It turns out that he's also argued quite strongly that high levels of refined sugar in people's diets are no problem: e.g., https://www.sciencedaily.com/releases/2018/08/180827110730.htm

To do so, he has to throw aside mountains of scientific research. I would say his attack above is a necessary part of that effort.

So while I am concerned about inefficiencies in academic work and the waste of taxpayer dollars, I'm much more worried about the effects of corporate money on research.

> So while I am concerned about inefficiencies in academic work and the waste of taxpayer dollars, I'm much more worried about the effects of corporate money on research.

Are there studies on whether corporate-funded research are of typically lower (or higher) quality than publicly-funded academic research? I can imagine it going either way, but I feel like I only have loose intuitions and anecdotes to go off of, and it'd be good to actually see data.

Good question. I did a quick google and came across Lisa Bero who seems to have done a huge amount of work on research integrity. From this popular article, it sounds like corporate funding is often problematic for the research process.

The article links to several systematic reviews her group has done, and the article 'Industry sponsorship and research outcome' does conclude that corporate funding leads to a bias in the published results:

Authors' conclusions: Sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources. Our analyses suggest the existence of an industry bias that cannot be explained by standard 'Risk of bias' assessments.

I just read the abstract, so I'm not sure if they tried to identify whether this was solely due to publication bias or if corporate-funded research also tended to have other issues (e.g. less rigorous experimental designs or other questionable research practices).

Without a Google Scholar search, I'd just point to how industry dealt with things like asbestos, lead, tobacco, and greenhouse gas emissions.

Science definitely needs scrutiny, debate, and evidence. That said, whenever someone's loudly proclaiming that an entire field is corrupt and incorrect, it should raise suspicions.

What's more likely: that thousands of (generally underpaid) researchers in hundreds of competing labs worldwide are all in cahoots, or that a few dissenting voices are funded by industry to argue that point of view?

I agree that for topics where there are transparent, obvious, and one-sided financial incentives on one side, and approximate consensus among experts on the other, the side with bad incentives (which is also the numerical minority) is more suspect.

However, when I think of industry funding for research, I mostly don't think of

> a few dissenting voices are funded by industry to argue that point of view

I think more of stuff in the vein of people actually trying to figure things out (eg, BigTech funds a lot of applied and even theoretical CS research, especially in AI and distributed systems).

> I'd just point to how industry dealt with things like asbestos, lead, tobacco, and greenhouse gas emissions

I don't know much about the other examples. I agree with greenhouse gases. My impression is that there was a lot of misinformation/bad science about vaping, and this was at least as much (and likely more) the fault of academic researchers as it was the fault of entrenched corporate interests.

I wouldn't expect a marked difference in the quality of non-controversial research whether funded by a national granting agency or private industry. That said, I'm not an expert on the topic, either.

As for "controversial" science in the sense of "any science that business/industry doesn't like," the pattern is quite similar whether we're talking about lead, asbestos, climate change, et cetera:

- Find a couple of researchers who will play ball and say your product is safe despite all the evidence to the contrary.
- Point to this repeatedly any time the topic comes up.
- Ignore, at all costs, the mountain of evidence that says you're wrong, and undermine it any way you can.
- Buy as many politicians as you can to try and prevent regulation.

As for vaping, I'd need to see some examples of bad academic research. I'm not sure you can blame the scientists for the consequences of poorly-regulated businesses. They can only test the products that they're given and tell you whether they're safe. They can't tell you if a manufacturer is going to change the oil they use or decide to include heavy metals.

That's up to the regulators to prevent.

Yeah, I don't want to imply that I strongly support the original claims. I think there are lots of very serious problems with incentives and epistemics in science, but nevertheless that both the incentives and the epistemics of scientists are unusually good in important ways.

(As an anecdote that probably shouldn't be taken as strong evidence, but that I found striking, I once tried out the 2-4-6 test on my lab, and IIRC something like two-thirds of members got the right answer first-time, and both group leaders present did so fairly quickly.)

I'm also very worried about the effects of corporate funding on research, at least in some domains.

Yeah, the more I looked into the guy, the more his critique fit into context. His work finds a home on some websites of questionable repute. haha

And as you point out, the people you meet in academia generally don't tend to be as he's characterized them.

I would be willing to bet that he has a financial motive to argue against the prevailing scientific consensus, just as we see in other instances where facts turn out to be inconvenient for corporate interests.

Thanks for the discussion on this Tom and Will.

I originally posted this article because, although it presents a very strong opinion on the matter and admittedly uses shock tactics by taking many figures out of context (as pointed out by Romeo and Will), I thought the sentiment was going in the same direction as both my personal sense of where science was moving and several other sources I'd read. I hadn't looked into any of the author's other work, and although his publication record seems reasonable, he has pushed some fairly fringe views on nutrition, and knowing this does reduce the weight I give to the views in this article (thanks for digging into it, Tom).

For a more balanced critique of recent scientific practice I'd recommend the book Real Science by John Ziman (I have a pdf; PM me if you'd like a copy). It's a long but fairly interesting read on the sociology of science from a naturalistic perspective, and claims that university research has moved from an 'academic' to a 'post-academic' phase, characterised as the transition from the rigorous pursuit of knowledge to a focus on applications, representing a convergence between academic and industrial research traditions. Although this may lead to more applications diffusing out of academia in the short term, the 'post-academic' system is claimed to lose some important features of traditional research, like disinterestedness, organised skepticism, and universality, and tends to trade quality for quantity. Societal interests (including corporate goals) would be expected to have much more influence on the work done by 'post-academic' researchers.

Agreed with both Will and Tom that there are certainly still a lot of people doing good academic research, and how strongly you weight the balance will depend on which scientists you interact with. Personally, I ended up leaving academia without pursuing a faculty position (in part) because I felt the push to use excessive spin and hype in order to publish my work and attract funding was making it quite substanceless. Of course, this may have been specific to the field I was working in (invertebrate sensory neuroscience), and I'm glad to hear that you both have more positive outlooks.

Without making claims about the conclusions, I think this argument is of very poor quality and shouldn't update anyone in any direction.

"As taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent"

Taking all claims at face value, you should not be persuaded that more money causes retractions just because retractions increased roughly in proportion with the overall growth of the industry. I checked the cited work to see if there were any mitigating factors which justified making this claim (since maybe I didn't understand it, and since sometimes people make bad arguments for good conclusions), and it actually got worse: they ignored the low rate of retraction (it's about 0.2%), they compared US-only grants with global retractions, they did not account for increased oversight and standards, and so on.

The low quality of the claim, in combination with the fact that the central mission of this think tank is lobbying for reduced government spending on universities and increased political conservatism on campuses in North Carolina, suggests that the logical errors and mishandling of statistics we are seeing here are partisan motivated reasoning in action.

willbradshaw made the exact same point, earlier, and had lower karma. What's up with that?

EDIT: Retracted because the parent comment is substantive in different ways. Still, acknowledging the earlier comment would've been nice!

[This comment is no longer endorsed by its author]

> From 1970 to 2010, as taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent, with most due to misconduct.

https://www.tylervigen.com/spurious-correlations

Can you clarify the point you're trying to make with the reference to spurious correlations, Will? I don't think the author is trying to make any deep claim about causation here, but just pointing out that a growing amount of taxpayer money is wasted due to retractions. (I appreciate the point from other commenters that this is still presumably a small fraction of the total funding though and so might not be as big a concern as the author suggests.)

Sure.

Taken at face value, the claim is that taxpayer funding and number of retractions have increased over time, at rates not hugely different from one another. I think both can almost entirely be accounted for by an increase in the total number of researchers. If you have more researchers producing papers, this will result in both a big increase in funding required and in number of papers retracted without any change in the quality distribution.

I would want to see evidence for a big increase in retractions per number of researchers, researcher hours or some other aggregative measure before taking this seriously as a claim that science has got worse over time. It's well-known that if you don't control for the total number of people in a place or doing a thing, all sorts of things will correlate (homicides and priests, ice-cream sales and suicides, etc.).

More substantively, I also disagree with the claim that a big increase in retractions is evidence of scientific decline. Insofar as there has been any increase in the per-capita rate of retractions, I regard this as a sign of increasing epistemic standards, and think both editors and scientists are still way too reluctant to retract papers. It's like the replication crisis: the problems have always been there, but we only started paying attention to them recently. That's a good sign, not a bad one.

Thanks for elaborating Will.

Agreed that an increase in funding for science will generally just increase the size of science, and the base assumption should be that the retraction rate stays the same, which would lead to a roughly proportionate increase in the number of retractions with science funding. The 700% vs. 900% figures roughly agree with that assumption (although it could still be that the reasons for retraction change over time).
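The proportionality argument can be made concrete with a couple of lines of arithmetic. Taking funding growth as a (rough, assumed) proxy for the overall size of the research enterprise:

```python
# A "700% increase" means an 8x multiple; a "900% increase" means 10x.
# If the volume of research scales with funding, the implied change in
# the per-unit retraction rate over 1970-2010 is modest.
funding_growth = 1 + 7.00      # 700% increase -> 8x
retraction_growth = 1 + 9.00   # 900% increase -> 10x

relative_rate_change = retraction_growth / funding_growth - 1
print(f"{relative_rate_change:.0%}")  # prints "25%"
```

So even at face value, the headline numbers imply only about a 25% rise in retractions per unit of funding over 40 years, not a 900% explosion in misconduct.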

The idea of increasing retractions being a beneficial sign of better epistemic standards is interesting. My observation is that papers are usually only retracted if scientific fraud or misconduct was committed (e.g. falsifying or manipulating research data); questionable research practices (e.g. p-hacking, optional stopping, or HARKing), failure to replicate, or even technical errors don't usually lead to a retraction (Wikipedia also notes that plagiarism is a common cause of retractions). It is a pity there is no ground truth for scientific misconduct to reference the retraction rate against.

As an aside, this summary of the influence of retractions and failures to replicate on later citations may be of interest. Thankfully, retraction usually strongly reduces the number of citations a retracted paper receives.

Thanks Gavin.

I'd be interested in seeing data on the distribution of causes of retraction and how it's changed over time. I know RetractionWatch likes to say that scientists tend to underestimate the proportion of retractions that are down to fraud. I do think some (many?) retractions are due to serious technical errors with no implication of deliberate fraud or misconduct. I suspect RetractionWatch has data on this.

I'm not claiming that it's inevitably true that more retractions indicates better community epistemics, but I do think it's a big part of the story in this case. A paper retraction requires someone to notice that the paper is worthy of retraction, bring that to the editors and, very often, put a lot of pressure on the editors to retract the paper (who are usually extremely reluctant to do so). That requires people to be on the lookout for things that might need to be retracted and willing to put in the time and effort to get it retracted.

In the past this was very rare, and only extremely flagrant fraud or misconduct (or unusually honest scientists retracting their own work) led to retractions. Now, partly as a side consequence of the replication crisis but also more general (and incomplete) changes in norms, we have a lot more people who spend a lot of time actively searching for data manipulation and other retraction-worthy things in papers.

This is just the science version of the common claim that a recorded increase (or decrease) in the rate of a particular crime, or a particular mental disorder, or some such, is mainly due to changes in how closely we're looking for it.

Unrelatedly, I'm quite enjoying watching the karma on this comment go up and down. Currently at -1 karma after 7 votes. Interesting data on differing preferences over commenting norms.

Aren't grant lotteries a more obvious solution than the three you mention?

I think they could help with some things. But as I wrote here, I am not sure it would be appropriate to fund academic research only through lotteries.
