
Summary: Creating the right incentive structures in science could make research more fluid, efficient, and less frustrating. Indeed, there seem to be multiple reasons why people complain about how science is done today. In this post, I analyze some of the problems science faces and hint at ideas that could improve the situation. The aim of this post is to suggest science policy as a possible research area for EAs, one where it might be possible to make progress that results in better science.

Introduction

Science is one of the key enablers of progress in our world, yet it seems to me that there are many ways it could be improved. Anecdotal evidence suggests that many scientists are in fact unhappy with the incentive structures, procedures, evaluation, and career advancement (https://fivethirtyeight.com/features/science-isnt-broken). This is a complex system and problem, so it seems unlikely that a simple solution exists for all of these issues. Still, because science is such an important engine of our society, some effort by EAs to understand it better would be a great use of resources. In the following, I give an overview of the main problems I see in science and, in some cases, some ideas about how we could find solutions.

Problems and solutions

Publishing

One of the main problems scientists complain about is the extremely high charges most journals apply to publications (see a discussion in https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science). The key reason this is possible is that science is a market of prestige, and it is not possible to grow the total amount of prestige: the more prestigious other people are, the more diluted your own prestige becomes. Publishers channel this natural scarcity through well-known journals that have become a sort of oligopoly, creating extremely profitable businesses.

Due to this high-recognition / high-price scheme, it seems unlikely that one could break the monopoly from the demand side (e.g., from article authors). However, I also think that publishers may have committed an unforced error by relying on unpaid, volunteer editors and referees. While being an editor might give you some recognition, refereeing is often given little weight for career promotion purposes. As a consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline to referee for high-fee journals. The reason I think this has some chance of succeeding is that most scientists see refereeing as a boring task to get over with relatively quickly, as they barely benefit from it. Indeed, from my experience trying to publish, in quite a few cases the problems with the quality of referee reports seem to stem from the referees' complete lack of interest in the role in the first place. To make this work, universities could try to require by contract that their researchers not volunteer as referees for high-fee journals, forcing those journals to lower their fees in order to access referees. Don't take this too seriously, though; it will probably need a lot of thought before a solution ends up working, if it ever does.

Specialization

One thing that Eliezer Yudkowsky asks for in his book “Inadequate Equilibria” is more specialization among scientists. In particular, there are a few different tasks scientists are commonly expected to do: a) research, that is, coming up with good ideas, writing papers, and so on; b) teaching, especially at universities; c) research evaluation (being a referee); and d) managerial tasks, such as applying for grants. I agree with him that we need to split up the work. Some people enjoy and are better at teaching; others, at doing research. I really don't think everyone should be required to do everything. In addition, dedicated science evaluators might help a lot with replication problems, referee quality, and speed.

One amusing aspect is that if the previous section's suggestion were to force publishers to hire researchers as editors and referees, those researchers might become more specialized and better at the job. We might even be able to figure out ways to evaluate the evaluators, something that is currently impossible due to the artificial scarcity of unpaid referees. Professional evaluators would also be better placed to enforce norms in sensitive fields such as bioengineering or AI.

Research evaluation

Evaluating research is perhaps one of the most difficult topics. It requires significant expertise, is very unpredictable, and seems resistant to reliable, systematic solutions. On this topic, I think the AI (or perhaps computer science) research community is doing a great job, much better than other areas, of experimenting with different peer review systems. I also seem to remember that the rationalist community was thinking about ways of using prediction markets to assess research quality.

It is really not clear to me what the best way to do research evaluation is, but perhaps someone interested in this problem could start by producing a meta-analysis of the different experiments (peer review systems) that have been carried out and their conclusions, along with the further experiments that should be done. It seems to me that, at the very least, we can treat this as a social science problem at a macroscopic level, where the intervention is the publication method and the outcome is how accurately the evaluation predicts scientific impact.
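
To make that framing concrete, here is a minimal sketch of what such a macroscopic comparison could look like. Everything below is hypothetical: the data are synthetic, the two "review systems" are placeholders, and later citations are used only as a crude stand-in for scientific impact.

```python
# Illustrative sketch only: synthetic data, not a real study of peer review.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_papers = 500

# Hypothetical "true" quality of each paper, unobservable in practice.
true_quality = rng.normal(size=n_papers)

# Two hypothetical review systems that differ only in how noisy their scores are
# (e.g. single-blind with 2 referees vs. double-blind with 4 referees).
scores_system_a = true_quality + rng.normal(scale=1.5, size=n_papers)
scores_system_b = true_quality + rng.normal(scale=0.8, size=n_papers)

# A crude proxy for realized impact, e.g. citations a few years later.
citations = np.exp(true_quality + rng.normal(scale=0.5, size=n_papers))

for name, scores in [("system A", scores_system_a), ("system B", scores_system_b)]:
    rho, _ = spearmanr(scores, citations)
    print(f"{name}: rank correlation between review score and later citations = {rho:.2f}")
```

In a real meta-analysis the review systems would be the ones actually tried by conferences and journals, and the outcome measure would need much more care than raw citation counts, but the basic shape of the comparison would be the same.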

Career advancement

It is also a common source of complaint in the scientific community that the research career is very unstable, relatively poorly paid, and extremely competitive. To be honest, this is probably a consequence of too many people enjoying doing science relative to the number of available research jobs. Indeed, the probability of landing a tenured research position in academia seems to have been shrinking for quite some time (https://forum.effectivealtruism.org/posts/3TQTec6FKcMSRBT2T/estimation-of-probabilities-to-get-tenure-track-in-academia). I don't know what should be done about it, but this competition can sometimes hurt the quality of the research being published, and the situation seems worth analyzing in more depth.

Conclusion

In summary, I think "fixing science" is a problem that can be approached from multiple angles. However, to the best of my knowledge, there has been relatively little systematic study of how to improve the situation. For that reason, I tend to think this might be a cause that deserves some attention from the EA community.

Comments

Thanks for this! I've been thinking quite a bit about this (see some previous posts), and there is a bit of an emerging EA/metascience community; I'd be happy to chat if you're interested!

Some specific comments:

As a consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline to referee for high-fee journals.

Could you elaborate on the change in the system you envision as a result of something like this? My current thinking (but very open to being convinced otherwise) is that lower fees to access publications wouldn't really change anything fundamental about what science is being done, which makes it seem like a lot of work for limited gains?

I agree with him that we need to split up the work. Some people enjoy and are better at teaching; others, at doing research. I really don't think everyone should be required to do everything. In addition, dedicated science evaluators might help a lot with replication problems, referee quality, and speed.

I think there is something here - I think it could be valuable to have more diverse career paths that would allow people to build on their strengths, rather than just having tasks assigned depending on seniority. It also seems like something where it's not necessary to design one perfect system; rather, different institutions could work with different models (just like different private companies work with different models of recruitment and internal career paths). I think it would be very interesting if someone did (or has done?) an overview of how this looks globally today - perhaps there are already some institutions with quite different ways of allocating tasks?

My crux here would be that even though I think this has the potential to make research much more enjoyable to a broader group, it's a bit unclear if it would actually lead to better science being done. I want to think that it would, but I can't really make a strong argument for it. I do think efficiency would increase, but I'm not sure we'd work on more important questions or do work of higher quality because of it (though we might!).

this is probably a consequence of too many people enjoying doing science relative to the number of available research jobs

You could be right, but it's not obvious to me. I have the impression that a lot of people doing science find it quite hard and not very enjoyable, especially at junior levels. It would be very interesting to know more about what attracts people to science careers and what their reasons for staying are - I think it's very possible that status, and being in a completely academic social context that makes other career paths feel abstract, plays an important role. Anecdotally, I dropped out of a PhD position after one year, and even though I really didn't enjoy it, dropping out felt like a huge failure at the time in a way that voluntarily quitting a "normal" job would not.
 

Hey C Tilli, Thanks for commenting!

My current thinking (but very open to being convinced otherwise) is that lower fees to access publications wouldn't really change anything fundamental about what science is being done, which makes it seem like a lot of work for limited gains?

My intuition is that forcing lower fees would make more money available for other parts of science. After all, most research is still done in universities and government agencies, and they usually have a limited budget to distribute each year. Honestly, I'm not sure how well this would work; it would require confederations of universities to be able to agree on it. I don't know. It seems much harder to me to try to force people not to publish in those reputable journals. In a sense, I feel the publishers are extracting rents from the environment in a damaging way. In particular, a back-of-the-envelope calculation suggests that roughly 10% of the cost of doing science goes to them.
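
For illustration only, here is a minimal sketch of the shape such a back-of-the-envelope calculation could take. All of the numbers are placeholders I'm inventing for the example, not sourced figures; a real estimate would need actual data on publishing fees and research budgets.

```python
# Illustrative back-of-the-envelope sketch: every figure below is a
# hypothetical placeholder, not a sourced estimate.
publication_cost_per_paper = 3_000   # assumed cost attributed to publishing one paper (fees/subscriptions), USD
papers_per_project = 3               # assumed papers produced by a typical funded project
total_project_cost = 100_000         # assumed total project cost (salaries, equipment, overheads), USD

publisher_share = publication_cost_per_paper * papers_per_project / total_project_cost
print(f"Share of research spending going to publishers: {publisher_share:.0%}")
# With these made-up inputs the share comes out around 9%, in the ballpark of
# the ~10% figure mentioned above; the point is only to show the shape of the
# calculation, not to defend any particular number.
```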

My crux here would be that even though I think this has the potential to make research much more enjoyable to a broader group, it's a bit unclear if it would actually lead to better science being done. I want to think that it would, but I can't really make a strong argument for it.

My intuition for this was that when you have a single kind of task, you are able to specialize more, and you can also concentrate better on the work you are doing rather than having to jump between tasks, which I believe kills (my) productivity: https://80000hours.org/podcast/episodes/cal-newport-industrial-revolution-for-office-work/. That being said, I might be wrong.

It would be very interesting to know more about what attracts people to science careers, and what reasons for staying are - I think it's very possible that status/being in a completely academic social context that makes other career paths abstract plays an important role.

I think you are right that social status can play some role, but I don't think it is the leading reason. The leading reason, to me, seems to be that science gives people purpose in a way that other things don't so much. In a way, going into and sticking with academia seems more like a struggle to do work that advances human knowledge, or one's own knowledge, even if the environment is kind of shitty. That's why it feels weird to drop out of academic research. Of course, this is very subjective and I might be wrong.

Thanks for this post. I agree with many of your points. I see science as a problem-solving engine, and yes, if it's not operating as well as it could be, then that's a huge opportunity cost for issues such as treating diseases, transitioning to clean energy/meat, etc.

One thought about publishing and incentives: if funders can be convinced not to care about publications, or to weigh other efforts the same or more, e.g. posting and commenting on preprints, then that could break the stranglehold that the publishing industry has on the scientific enterprise. Institutions mostly care about how well a researcher raises money. If they see that they can hire a professor who doesn't have lofty publications but will have access to funding, then I suspect traditional publications will become increasingly moot. To a certain degree, we see this in math and physics, where many impactful papers just get published as preprints on arXiv and may never actually be traditionally published.

Life sciences research still has a way to go. I was thinking that if private funders such as HHMI or the Gates Foundation could be lobbied to weigh publications less in their funding decisions, that might help here.

I don't know. I agree that we give too much weight to papers, but then what would we substitute for them? How likely is it that we would face the same problem in a new competitive system? I think it is worth exploring, but my belief is that this problem comes from the competition, not from the articles. Any other competitive system would most likely have similar problems, in my opinion.

Indeed. To be clear, when I refer to publications, I mean traditionally published ones: papers are submitted to journals, editors determine whether they are impactful enough, and then they are sent out for review. This is such a belabored process, especially in the age of the internet. And for what it's worth, the competition is exacerbated by the lack of space in lofty journals.

And sure, we can't jettison publications without something taking their place. It could still be papers, just not traditionally published ones. We saw this play out during the pandemic, when papers on the coronavirus were posted on preprint servers such as bioRxiv and medRxiv, and comments and refutations were posted in real time. In my view, science needs to proceed in this direction. We need more real-time science. We can't have science that remains hidden from public view because Reviewer #3 thinks one more experiment is needed, dragging out publication by another year.

We're not going to diminish competition without creating more permanent positions in academia or more opportunities for academic scientists. Competition to become a professor is fierce, since professorships are in short supply and seemingly the only path to permanent work in academia. One idea is to have more non-traditional routes, such as loftier, permanent postdoctoral scientist positions. These could be scientists who don't run a lab but work in one and do primary research themselves.

I see this as one of those problems that could be addressed with a "trickle-down solution": once the top universities and/or academic journals change their policies, it is likely that all the rest will copy them and follow suit. I don't know if there is any type of "lobbying" we can do to influence these institutions, but it seems like a potentially straightforward and tractable path.

Excellent post. Could you expand further on your point:

“I think the AI (or perhaps computer science) research community is doing a great job, much better than other areas, of experimenting with different peer review systems”

It would be interesting to see how things are done differently in these fields. Even a link to other resources would be great. Thanks.

Hey Eric, Thanks! I think that in AI conferences, organizers have played around with a few things:

  • Some conferences have a two-phase review system (AAAI), while others have only one (NeurIPS).
  • Sometimes the chair might read and discard papers beforehand.
  • Reviews are sometimes published on OpenReview so that everyone can see them.
  • Referees are asked to provide confidence ratings for their assessments (a minimal sketch of one way such ratings could be used follows below). Etcetera (see for example https://blog.ml.cmu.edu/2020/12/01/icml2020exp/).
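
As a purely hypothetical illustration of that last point, one simple way confidence ratings could be combined with scores is a confidence-weighted average; this is just a sketch under my own assumptions, not how any particular conference actually aggregates reviews.

```python
# Hypothetical sketch: weight each referee's score by their self-reported confidence.
def confidence_weighted_score(scores, confidences):
    """Return the confidence-weighted mean of referee scores."""
    total_weight = sum(confidences)
    if total_weight == 0:
        raise ValueError("At least one referee must report non-zero confidence")
    return sum(s * c for s, c in zip(scores, confidences)) / total_weight

# Example: three referees score a paper on a 1-10 scale and report confidence 1-5.
print(confidence_weighted_score(scores=[7, 4, 8], confidences=[5, 2, 3]))  # 6.7
```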

In physics (my field) things look much lamer. To start with, we only publish in journals, which might be fine, but it means an ever-longer review process. Single-blind review is still widely used. Sharing code is fully optional (you can just say that you'll provide it upon reasonable request). And there are often just two (or even one) referees; if you're lucky you may get up to four. But the real problem is the lack of assessment of the reviewing process itself: I don't think anyone is making an effort to improve it beyond what "looks good" (open access, maybe double-blind)... Since we do not run experiments on the review process or try to improve it, it lags behind.

I'd bet that in other sciences it is even worse: chemists and biologists are not even used to using arXiv equivalents. Social sciences... the social sciences are unclear to me, but they seem probably worse (p-value tweaking...). 😅
