A very pessimistic view on the state of research quality in the US, particularly in public health research. Some choice quotes:
My experiences at four research universities and as a National Institutes of Health (NIH) research fellow taught me that the relentless pursuit of taxpayer funding has eliminated curiosity, basic competence, and scientific integrity in many fields.
Yet, more importantly, training in “science” is now tantamount to grant-writing and learning how to obtain funding. Organized skepticism, critical thinking, and methodological rigor, if present at all, are afterthoughts.
From 1970 to 2010, as taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent, with most due to misconduct.
The widespread inability of publicly funded researchers to generate valid, reproducible findings is a testament to the failure of universities to properly train scientists and instill intellectual and methodologic rigor.
academic research is often “conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.” In other words, taxpayers fund studies that are conducted for non-scientific reasons such as career advancement
Incompetence in concert with a lack of accountability and political or personal agendas has grave consequences: *The Economist* stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.
Still, the author says there is hope for reform. The last three paragraphs suggest abolishing overheads, limiting the number of grants a PI can hold and the maximum age of PIs, and preventing the use of public funding for publicity.
I agree that it's an extreme stance and probably overly general (although the article does note that it is specific to public health and biomedical research).
Still, my feeling is that this is closer to the truth than we'd want. For instance, from working in three research groups (robotics, neuroscience, basic biology), I've seen that the topic of a paper (e.g. chosen to round out somebody's profile) and its participants (e.g. re-doing experiments somebody else already did so they don't have to be included as an author, instead of just using their results directly) are often selected mainly for perceived career benefit rather than scientific merit. This is particularly true when the research is driven by junior researchers rather than established professors, since to the former the value of a paper lies much more in whether it will help them get grants and a faculty position than in its scientific merit. For example, it's very common for a group of post-docs and PhD students to collaborate on a paper without a professor in order to 'demonstrate' their independence, but these collaborations often just end up describing an orphan finding or an obscure method that will never really be followed up on, and the junior researchers' time could arguably have produced more scientifically meaningful results had they focused on their main projects. Of course, it's hard to evaluate how such practices influence academic progress in the long run, but they seem inefficient in the short term and stem from the perverse incentive of careerism.
My impression is that questionable research practices probably vary a lot by field, and the fields most susceptible to poor practices are probably the ones where the value of the findings won't really be known for a long time, like basic biology research. My experience in neuroscience and biology is that much more 'spin', speculation, and storytelling goes into presenting biological findings than in robotics (where results are usually clearer steps along a path towards a goal). While a certain amount of storytelling is required to present a research finding convincingly, it has become a bit of a one-upmanship game in biology, where your work really has to be presented as a critical step towards an applied outcome (like curing a disease or inspiring a new type of material) for anybody to take it seriously, even when it's clearly blue-sky research that hasn't yet found an application.
As for the author, it looks like he is no longer working in academia. From his publication record he seems to have been quite productive for a mid-career researcher, and although he may have an axe to grind (presumably he applied for many faculty positions but didn't get any, a common story), being outside the Ivory Tower can provide a lot more perspective on its failings than you get from inside it.