Thanks for the discussion on this Tom and Will.
I originally posted this article because, although it presents a very strong opinion on the matter and admittedly uses shock tactics by taking many values out of context (as pointed out by Romeo and Will), I thought the sentiment matched both the direction I personally felt science was moving and several other sources I'd read. I hadn't looked into any of the author's other work, and although his publication record seems reasonable, he has pushed some fairly fringe views on nutrition, and knowing this does reduce the weight I give to the views in this article (thanks for digging into it, Tom).
For a more balanced critique of recent scientific practice I'd recommend the book Real Science by John Ziman (I have a pdf, PM if you'd like a copy). It's a long but fairly interesting read on the sociology of science from a naturalistic perspective, and claims that university research has moved from an 'academic' to a 'post-academic' phase, characterised as the transition from the rigorous pursuit of knowledge to a focus on applications, which represents a convergence between academic and industrial research traditions. Although this may lead to more applications diffusing out of academia in the short term, the 'post-academic' system is claimed to lose some important features of traditional research, like disinterestedness, organised skepticism, and universality, and tends to trade quality for quantity. Societal interests (including corporate goals) would be expected to have much more influence on the work done by 'post-academic' researchers.
Agreed with both Will and Tom that there certainly are still a lot of people doing good academic research, and how strongly you weight the balance will depend on which scientists you interact with. Personally, I ended up leaving academia without pursuing a faculty position (in part) because I felt the push to use excessive spin and hype in order to publish my work and attract funding was making it quite substanceless. Of course, this may have been specific to the field I was working in (invertebrate sensory neuroscience), and I'm glad to hear that you both have more positive outlooks.
Thanks for elaborating Will.
Agreed that the increase in funding for science will generally just increase the size of science, and the base assumption should be that the retraction rate stays the same, which would lead to the number of retractions increasing roughly in proportion to science funding. The 700% vs. 900% figures roughly agree with that assumption (although it could still be that the reasons for retraction change over time).
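To make the proportionality check explicit, here is a rough sketch; it treats the 700%/900% figures quoted in this thread directly as growth multiples, which glosses over how the percentages were originally defined:

```python
# Rough sketch: if the per-paper retraction rate stayed constant,
# retractions should scale in proportion to the size (funding) of science.
# The figures below are the percentages quoted in this discussion.
funding_growth = 7.0      # ~700% growth in science funding
retraction_growth = 9.0   # ~900% growth in retractions

# How much faster retractions grew than pure proportionality predicts.
excess = retraction_growth / funding_growth
print(f"Retractions grew {excess:.2f}x the proportional expectation")
```

A ratio of about 1.29 is close enough to 1 that the constant-retraction-rate assumption looks roughly consistent with the numbers, though not exactly proportional.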
The idea of increasing retractions being a beneficial sign of better epistemic standards is interesting. My observation is that papers are usually only retracted if scientific fraud or misconduct was committed (e.g. falsifying or manipulating research data) - questionable research practices (e.g. p-hacking, optional stopping, or HARKing), failure to replicate, or even technical errors don't usually lead to a retraction (Wikipedia also notes that plagiarism is a common cause of retractions). It is a pity there is no ground truth for scientific misconduct to reference the retraction rate against.
As an aside, this summary of the influence of retractions and failures to replicate on later citations may be of interest. Thankfully, retraction usually strongly reduces the number of citations a retracted paper receives.
I agree that it's an extreme stance and probably overly-general (although the specificity to public health and biomedical research is noted in the article).
Still, my feeling is that this is closer to the truth than we'd want. For instance, from working in three research groups (robotics, neuroscience, basic biology), I've seen that the topic (e.g. to round out somebody's profile) and participants (e.g. re-doing experiments somebody else did so they don't have to be included as an author, instead of just using their results directly) of a paper are often selected mainly on perceived career benefits rather than scientific merit. This is particularly true when the research is driven by junior researchers rather than established professors, as the value of papers to the former is much more about whether they will help get grants and a faculty position than about their scientific merit. For example, it's very common for a group of post-docs and PhD students to collaborate on a paper without a professor to 'demonstrate' their independence, but these collaborations often just end up describing an orphan finding or obscure method that will never really be followed up on, and the junior researchers' time could arguably have produced more scientifically meaningful results if they had focused on their main projects. Of course, it's hard to evaluate how such practices influence academic progress in the long run, but they seem inefficient in the short term and stem from a perverse incentive of careerism.
My impression is that questionable research practices probably vary a lot by research field, and the fields most susceptible to poor practices are probably ones where the value of the findings won't really be known for a long time, like basic biology research. My experience in neuroscience and biology is that much more 'spin', speculation, and storytelling goes into presenting biological findings than there was in robotics (where results are usually clearer steps along a path towards a goal). While a certain amount of storytelling is required to present a research finding convincingly, it has become a bit of a one-upmanship game in biology, where your work really has to be presented as a critical step towards an applied outcome (like curing a disease or inspiring a new type of material) for anybody to take it seriously, even when it's clearly blue-sky research that hasn't yet found an application.
As for the author, it looks like he is no longer working in academia. From his publication record it looks like he was quite productive for a mid-career researcher, and although he may have an axe to grind (presumably he applied for many faculty positions but didn't get any - a common story), being outside the Ivory Tower can provide a lot more perspective about its failings than what you get from inside it.
Good point. Unfortunately the Economist article referenced for this number is pay-walled for me and I am not sure if it indicates the total number of clinical trial participants during that time.
Your comment got me interested, so I did some quick googling. In the US in 2009 there were 10,974 registered trials with 2.8 million participants, and in the EU the median number of patients studied for a drug to be approved was 1,708 (during the same time window). I couldn't quickly find the average length of a clinical trial.
I expect 80,000 patients would be at most 1% of the total population of clinical trial participants during that 10-year window, so this claim might be a bit over-emphasised (although it does seem striking at first read).
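As a back-of-the-envelope check, here is a sketch using the figures above; it assumes the 2009 US participant count is typical of each year in the window, which is a rough assumption:

```python
# Rough estimate of what fraction of all trial participants 80,000 patients
# would represent, assuming ~2.8 million US participants per year (the 2009
# figure) held roughly constant over the 10-year window.
participants_per_year = 2_800_000
years = 10
claimed_patients = 80_000

total_participants = participants_per_year * years
fraction = claimed_patients / total_participants
print(f"{fraction:.2%} of ~{total_participants:,} total participants")
```

Under these assumptions the claimed patients come to roughly 0.3% of all participants, comfortably within the "at most 1%" estimate.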
Jason Crawford is writing about the history of many industrial advances at Roots of Progress. I think his approach is complementary to yours, and he describes it at:
This seems like an interesting way of comparing the results of different types of design solutions. One important thing to consider is that evolution was under a lot of additional constraints compared to engineers when 'designing' organisms. For instance, reactions occur at room temperature and with organic chemistry, many organisms are self-replicating and self-assembling, and energy and materials are usually limited to what an organism can collect itself. And rather than optimising for any specific parameter, evolution is just aiming for an organism to survive and reproduce - so few solutions will be optimal in terms of performance/efficiency unless there was a strong evolutionary pressure for them to be so. My experience with bio-inspired design is that it is usually best to look to biology for high-efficiency solutions, as resource scarcity is a constant in most environments. High-performance biology is mostly seen in microscopic structures, which probably still out-perform engineered solutions in many areas.
I think something else to consider is that familiarity can also build a passionate interest that is hard to let go of.
In the case of Sue the Poet, it's not that she was unskilled and looking for something interesting - she had already found writing and, as described, practiced it a lot and found she is a skilled and (potentially) successful writer. Likewise, I assume that your friend the computer scientist has already studied computer science for a while and has become quite skilled at it, so it's less appealing to start from scratch with physics (there would be some skills in common between CS and physics, but it would probably still feel like a big step back on the learning curve).
While there is an element of sunk-cost fallacy here, I think that people who've done training and found that they are skilled at something are probably less likely to want to change their interest than somebody who has gained experience and found that they are unskilled, or otherwise unsuccessful, at their first interest. This seems like it could create a perverse incentive, as generally talented people who are highly successful in their first field could be disincentivized from moving into a more important field where they could have a larger impact. In academia there are often programs to encourage interdisciplinary research, but I wonder if the people these draw in may tend to be those who weren't particularly successful in their original field? (I consider myself an interdisciplinary scientist and can admit there is a bit of truth in that.)
In line with this, I think it's good that EA/80k posts often emphasize the value of testing out a variety of promising career paths, not just picking the subject you are either most interested in or judge as most important when entering college. Maybe it could also be good to pre-commit to testing some number of options for a certain time (say 4 fields x 6 months) before comparing your interest and ability between them, to avoid the temptation to commit to the first one that grabs your attention. I know a lot of graduate courses do something similar with lab rotations, although I don't know how common this is elsewhere in career planning/education.
Thanks for posting this. I think that there is a lot more scope for the INT framework to be used by researchers outside of the top-priority EA areas. From personal experience, if you come into EA as an experienced researcher from a field outside the priority areas it's somewhat hard to connect with the existing resources unless you're willing to change fields.
But I think there would be benefits from more general outreach to scientists/academics working in other areas. For instance, nudging researchers to think about the potential impacts/consequences of their work could encourage a norm of selecting impactful, not just interesting, projects (academic research already encourages working on neglected/original and tractable problems), and some may also pass this idea on to their students, who may be better positioned to transition to work on a top-priority EA area.
Also, it may be worth considering that in many cases preprints are considered much more 'citeable' in academic articles than general webpages/blog posts would be. I think having the DOI is seen as a mark of permanence, which is considered superior to just having a permalink to the accessed version.
Peter, do you have any tips for being productive while doing independent research and other work in parallel? I'm also trying to do both scientific research and scientific consulting at the same time. I've found my two major difficulties are slowed productivity while context switching (which I usually need to do several times a week, between projects in very different fields) and feeling obliged to prioritize time/energy on my clients' research projects ahead of my own (regardless of what I consider their relative importance/interest to be). I'd be interested to know how you deal with these or similar challenges.