
Basic science has done a great deal to increase welfare and future potential through the technological innovations it enables. Science is done mainly in academia and is strongly shaped by funding sources (such as government funding, venture capital, and philanthropy) and by its own cultural norms.

In this post, I sketch some of my current views on how we can improve prioritization among scientific projects so that it aligns better with societal goals. This is more of a dump meant to help me articulate a bit more clearly how I currently think about prioritization in science; it is not original, and it draws on literature such as this as well as some personal experience. I plan to keep engaging with this topic and to write clearer, literature-supported, and better-argued posts about scientific prioritization in the future.

Inefficiencies in Academia

Iain Chalmers, Paul Glasziou - Avoidable Waste in the Production and Reporting of Research Evidence argues that there is "at least 85% waste" in biomedical research, stemming from inappropriate research design, biased outcomes, and inaccessible or unorganized information. In recent years, in response to the replication crisis, these issues have slowly been addressed by the open science and metascience communities, and there is a growing understanding of the systemic problems damaging scientific reliability and usability. There is still plenty of important work to do there, such as this Registered Reports project, which is positively evaluated by Let's Fund.

In addition to these sources of inefficiency, Chalmers and Glasziou raise the issue of poorly selected research goals. In the applied biomedical context, academic research results are often not useful to clinicians, or address only minor concerns. I expect similar problems to occur elsewhere throughout science: researchers work on questions that do not seem particularly promising from a societal welfare perspective.

The case for academic freedom

It is generally believed within academic circles that the academic pursuit of knowledge should be done with minimal outside interference with the inquiry process of the scientist. The only directive that a scientist should follow is that of her expert peers, as they are the only ones who can understand the intricacies involved and how to best advance the state of the art.

What goes against academic freedom? Mostly, directing resources (funding and academic positions) toward topics or people according to rules other than peer-review-style assessments of scientific merit. But it also includes the consequences of relationships between academia and the private sector, the undermining of academia's primacy in decision making, and other cases where the incentives of the individual researcher do not align with the collective academic view of which scientific knowledge is most important to produce.

The main question we are interested in is: to what degree does an increase in academic freedom lead to better societal welfare? Supporters of academic freedom usually argue that it leads to better societal outcomes as follows:

  1. The scientific community follows its combined curiosity and thus acts to maximize how much knowledge it can produce.
  2. Throughout history, the major breakthroughs in science came from linking surprisingly unrelated results together. We cannot know what scientific knowledge would translate to applications.
  3. Thus, we should maximize increased knowledge, without favoring one direction over another, which is what the academic system is designed to do.

I didn't try to steelman this argument, and there is a lot more to be said in favor of academic freedom. However, here I'd like to point out several reasons to be skeptical of its conclusion.

  1. It's not clear to me how much productivity is lost when scientists work on things they are less intrinsically interested in. The situation seems to be fine in commercial companies; people generally develop an interest in what they become good at over time, and it is definitely possible to maintain a large degree of autonomy. Perhaps more importantly, shifting resources so that more important projects have more researchers working on them would create more room for universities to hire researchers and research students to work on those topics - and these researchers would presumably have comparable levels of authentic interest.
  2. The peer-review process in science incentivizes researchers to work on whatever is considered most prestigious, and their work is judged by senior researchers with competing interests. It's not clear to me that most researchers can, in fact, strictly work on what is most interesting to them.
  3. It is not surprising that increased knowledge has unexpected uses, but that doesn't mean that we can't do better by prioritizing according to what seems more promising.
  4. Furthermore, targeted ARPA-style programs seem to have been successful (although there are other factors involved). (By the way, see this nice recent write-up on why DARPA works by Ben Reinhardt)
  5. The EA worldview seems to depend on our being able to find out what does more good. I find myself having a prior that puts more weight on the general plausibility of improved planning and prioritization. This clearly indicates that this is an important question I need to address for myself, though I'm not really sure how this observation should affect my posterior.
  6. Generally, the above argument for preferring academic freedom over prioritization interventions seems suspiciously conservative. The people who voice these arguments are mostly academics, who have a strong interest in maintaining a status quo in which they have both freedom and high status.

What can we do?


There doesn't seem to be agreement on whether (basic) science should have more or less academic freedom than exists today. Furthermore, even if there should be more high-level prioritization, it is not yet clear how to actually do that prioritization well.

I currently think it very likely that there are great opportunities to prioritize more in academia, potentially massively more. I think it is somewhat unlikely but still possible (say, 25%) that the existing literature contains clear arguments supporting that conclusion in a manner that would convince most knowledgeable people with an EA perspective (that is, some sort of welfare maximization plus a reasoning system that holds empirical evidence and rational reasoning in high regard).

Questions currently addressed in the EA community about prioritization can shed some light on what general topics would be of more interest, and what kinds of technological advances would be better. However, we are very far from being able to systematically prioritize and evaluate scientific projects well enough.

Support highly prioritized scientific projects

Even if academia is not optimally organized for doing good, many people are working on very valuable projects, and there are ways to make focused, non-systemic changes in targeted disciplines. Finding highly promising research projects and supporting them can be highly valuable.

Some examples of initiatives doing this:

  1. Open Philanthropy. They seek neglected opportunities (mostly) within Biomed, check their scientific validity and consider the people involved, and fund the ones with the most promising impact.
  2. The Good Food Institute. They identify key scientific questions for the advancement of alternative proteins and gaps and opportunities within the scientific community, and support research on these topics.
  3. This new proposal by Sam Rodriques and Adam Marblestone. They suggest finding neglected scientific or technological areas with high potential for increasing progress or improving welfare, and systematically creating "Focused Research Organisations" to target them.

Interventions in the academic ecosystem

Ideally, we could improve how science is aligned toward doing more societal good. This could be done in many different ways, such as directing major funding mechanisms, changing cultural norms, or improving the transition to technology. I don't feel I understand this well enough yet to say anything interesting in this post, but I'm very interested in pursuing it further and seeing what the community makes of it.


Thanks for writing this up Edo! We are definitely thinking along similar lines and should talk more about this some time!  

In short, I think that EA should start to actively seek to influence academic research and funding norms as a means to ensuring that this industry produces more socially relevant research and rhetoric that will influence key decision makers. 

I think that EA can do this by creating lots of academic evidence (e.g., credible 'published' research), some of which will just involve amplifying existing work (e.g., converting high-quality EA research into academic paper format). 

This will require collaboration and co-design on mutually beneficial projects between academics (who can and want to publish research but often lack good  data collection opportunities) and practitioners (who often have good data and benefit from publishing, but lack the ability or incentives to justify that effort).  Happy to unpack that a bit more in the future when I have more time!

Looking forward to seeing that unpacked :)

Thank you for writing this Edo, it's really interesting to read about these topics as someone who's not really knowledgeable in research and academia. 

"it's not clear to me how much productivity loss is there when scientists are working on stuff they are less intrinsically interested in. The situation seems to be fine in commercial companies..."

I would assume there's a major difference in why most researchers in academia do what they do (interest and sheer curiosity, along with prestige) and why most professionals in the private sector do what they do (money and career development). This is not to say you're not right about that, but I think it's important to keep in mind the difference in the motivation that drives people's work in different work environments.



Yea, I could have made that clearer but that was exactly my point :) If other incentives seem to work just as well, then we can perhaps change the motivation source to something which overall does more good. I think this is an interesting question and definitely sits deep in this debate.

I think another interesting example to compare to (which also relates to Asaf Ifergan's comment) is private research institutes and labs. I think they are much more focused on specific goals, and give their researchers different incentives than academia, although the actual work might be very similar. These kinds of organizations span a long range between academia and industry.

There are of course many such examples, some of which are successful and some probably less so. Here are some that come to my mind: OpenAI, DeepMind, The Institute for Advanced Study, Bell Labs, Allen Institute for Artificial Intelligence, MIGAL (Israel).

Thanks for the write-up. Regarding the issue of loss of motivation when scientists work on research they are less intrinsically interested in: 

I know of at least one large scale historical experiment which did this. In the Soviet Union, science was reorganized to investigate areas specifically expected to increase social welfare (sadly sometimes the conclusions were predetermined by party cadres). This quote from an overview article seems relevant: 

Under the Bolshevik rule, scientists lost much of their autonomy and independence but acquired more social prestige and de facto influence on politically important decision making. The Soviet regime valued science more highly and allocated it a proportionally larger share of the national income than did contemporary governments in economically better developed and more prosperous countries. It strongly opposed the ideology of pure science, promoting instead the ideal of science as potentially usable - even if not always immediately applicable - knowledge about the world.  

https://www.jstor.org/stable/40207005?seq=8#metadata_info_tab_contents (page 122) 

It might be worth looking into how and whether this actually worked to produce good research. 

Thanks! This seems very relevant. My guess before glancing at this would have been that Soviet science was affected negatively by restricting pure science, but it seems to be much more complicated than I'd naively thought. I'll definitely look more deeply into this.
