Introduction

In this post I explore potential strategies for improving the choice of research questions in scientific research. This is part of a project for which I have received funding from the EA Infrastructure Fund, and a follow-up to my problem mapping for scientific research. A huge thank you to Edo Arad, James Smith, David Janku and Michael Aird for thoughtful comments and input.

The post is written mainly for people who are thinking about using their time and resources to try to improve the scientific research system on a broad level, for example by starting new organisations, by influencing the development of existing organisations, by doing meta-research or by funding new initiatives through grant-making. It is probably less relevant for an individual researcher who is thinking about their own choice of research questions (though feedback from researchers would be very appreciated).

Throughout the post I’ll use the thinking emoji 🤔 to mark sections where I describe what I think could be an opportunity for a new project, so that these are easy to find when scrolling through.

My main takeaways after working on this post are:

  • Influencing the direction of research seems to me more pressing, and more neglected, than working on improving other aspects of the research system.
  • Influencing research policy seems like a potentially valuable and rather unexplored area.
  • I also think it could be worth looking into influencing academic culture and grantmaking criteria to improve priorities between fields and projects.
  • Improved roadmapping resources through online databases or an expanded type of review articles could support better decision-making, particularly for actors that are already value-aligned.
  • I think people who consider starting initiatives in this area ought to think very carefully about whether their project, if it succeeded, would be impactful enough to be worth the effort and the high risk of failure (Ozzie Gooen’s post Flimsy Pet Theories, Enormous Initiatives comes to mind).

Note that when I write about “scientific research” or “science”, I mean this in a broad sense that is not meant to exclude the social sciences.

Selecting the “right” research questions

Why this is relevant

Selection of research questions occurs at many levels, from the most general when high-level political decisions are made about prioritizing e.g. climate research, to the most specific when an individual researcher formulates a research question for an upcoming project. In between these extremes are the decisions made regarding funding, hiring and publishing, where the peer review process often plays a central part.

Selection and framing of research questions are key to setting the direction of scientific research, and thereby have a huge influence on how much value is produced by the resources spent on it. While I hope this post can have some value for explicitly EA-aligned research, my main focus is on influencing the general scientific research system. For reference, a recent estimate is that $46 billion is now committed to EA (including everything, not just research), while an estimated $1700 billion is spent annually on general scientific research and development. While we can hopefully expect EA-funded research to generate more impact than average per dollar spent, research that happens outside EA could easily be as important or more important to the direction that history will take.
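To make the scale difference concrete, here is a rough back-of-the-envelope comparison using the two figures above. Note the caveat that the EA figure is a total committed stock while the R&D figure is an annual flow, so this understates the gap if anything:

```python
# Rough back-of-the-envelope using the figures cited above (USD billions).
# Caveat: the EA figure is a total committed stock; the R&D figure is an annual flow.
ea_committed = 46          # estimated total funding committed to EA (all causes)
global_rd_annual = 1700    # estimated annual global spending on scientific R&D

ratio = global_rd_annual / ea_committed
print(f"Annual general R&D spending is ~{ratio:.0f}x the entire committed EA pool.")
```

Even on these crude numbers, EA-funded research would need to be dozens of times more impactful per dollar before the direction of general research stopped mattering more in absolute terms.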

As a definition of what a good or “better” research question would be, I will stick to my previous position that better science is science that more effectively improves the lives of sentient beings (keeping improved “understanding of the universe” as an important instrumental goal). I’ve come across some interesting criticism of that position by Philip Kitcher in his book Science in a Democratic Society, which is well worth a read for anyone who wants to explore more complex proposals for how to set priorities in scientific research. For the purposes of this post, however, I’ll use the term “value-aligned” to mean aligned with the aim of improving the lives of sentient beings (which I understand as aligned with the typical values of the EA community).

To be a bit more specific, I would think that science would produce more value if more priority was given to research that meets the needs and preferences of the global poor as well as the needs and preferences of future people.

Why I think it’s more pressing to improve what we do research on than to improve quality and efficiency in research

Despite the importance of research questions, this area seems to get much less attention than other aspects of “improving research”. There is a growing reform movement in science, generally going by the label metascience, with conferences such as Metascience and AIMOS and organizations such as the Center for Open Science or METRICS. But the focus of these initiatives is not on setting priorities for what we should try to learn through scientific research, but rather on improving the quality, reliability and efficiency of science. The problems in focus are, for example, how to make sure that published results can be reproduced by a different research group, that statistical methods are used in appropriate ways, or that results and data are shared effectively. These are important and interesting challenges, but if the research questions themselves are not relevant and valuable to answer, it does not really matter how reliable or accessible the results are.

I would argue that it is more pressing to work on improving the direction of research by improving what fields and research questions are prioritized, than to add further resources to improving the quality and efficiency of research overall. 

This is mainly based on two assumptions: 

1) I think there is a lot of room for improvement in how we prioritize different research areas and research questions. This means that additional work spent on quality and efficiency might be wasted on research areas that are not very valuable anyway.

2) I think the existing metascience reform movement has good momentum and will be able to make progress on quality and efficiency. This means that the marginal utility of additional work on these issues is lower, even when applied to a comparatively valuable research field.

I have noticed that within EA, too, a lot of people who are interested in improving science are drawn to classical metascience issues such as open access or reproducibility. What I would like to see in these cases is really explicit theories of change, stating the logical steps all the way to the ultimate objective of an initiative. I think there is a risk that people who (like myself) have had disappointing and frustrating experiences of academic research sometimes fail to take the necessary step back and consider that what has been most frustrating in our personal experiences might not be what is most in need of fixing.

My guess regarding why the (non-EA) metascience reform movement focuses on quality and efficiency rather than on priorities is that this has to do with the tradition of “value-free” science and ideals of scientific freedom. It is hard to discuss improvements to research questions without getting into a discussion of values - how do we determine what research is most valuable, and who is to make that decision? I think many researchers find it more comfortable to address quality issues or open access than to ask fundamental questions about what research is valuable to pursue at all.

There is a previous, very interesting, forum post called Disentangling “Improving Institutional Decision-Making”, which differentiates between technical improvement of decision quality on the one hand and improvement in terms of greater value-alignment on the other. When I approach improvement of research questions, both of these perspectives apply, though I will focus mainly on value-alignment. An example of improved value-alignment would be a grantmaker in health research moving from a focus on national disease burden to global disease burden. An example of technical improvement of decision quality could be researchers getting access to better information on what research projects are already underway, so that they can make more informed decisions when selecting research questions.

Another related post is Improving the future by influencing actors' benevolence, intelligence, and power. Improvement of research questions would correspond to improving the benevolence and/or intelligence of the actors that influence the scientific research agenda.

What could we do to improve the choice of research questions?

In the upcoming sections, I suggest some possible strategies for improving the choice of research questions. The first sections deal with research policy, grantmaking, and academic culture, where I believe there could be opportunities to improve the value-alignment or benevolence of important decisions and incentive structures. The final section covers “roadmapping”: improving the understanding of the current knowledge landscape and knowledge gaps, which could offer some opportunities for value-alignment but probably has more potential for improving the decision quality or intelligence of decision makers who are already reasonably value-aligned.

Research policy

Influencing which areas get priority

Research policy is made both at national levels and in bodies such as the EU, the UN and the OECD. Research policy documents generally highlight specific priority areas to which a lot of funding is then directed - see for example the current research and innovation strategy by the European Commission, which guides the spending of €95.5 billion in the EU funding programme Horizon Europe. Priority areas can be defined on a very general level, such as promoting a circular economy, or on a more specific level, as when the World Health Organization makes a list of “priority pathogens” to focus the development of new antibiotics.

Influencing which areas of research get priority in this way seems like an important opportunity to improve the value-alignment of research questions, given that we could identify changes that would be desirable. From my personal experience working with research funding in a limited field (antibiotic resistance), I have the impression that it can be rather straightforward to identify desirable changes at least within a field. This is not to say that it would be easy to figure out the optimal resource allocation within the field, but if the current situation is far from optimal it might not be so difficult to identify at least some changes that would be improvements (such as prioritizing funding for projects that address the disease burden of low-income countries).

I am uncertain about the tractability of identifying desirable changes in resource allocation between fields, but it might be a similar situation where one can figure out some plausible improvements even though the optimal allocation is uncertain. For example, one might push for changes that would benefit AI alignment research or biosafety.

Something interesting about research policy is that it doesn’t seem to get a lot of media attention and is rarely the topic of political debate (with some exceptions, see next section), while at the same time these policies seem like they could have a large impact on society. This might be an indication that influencing research policy could be a rather tractable way to influence both the near-term and the long-term future - if other stakeholders don’t have strong positions on what research policy should look like, it should be possible to influence the outcomes.

One established type of lobbying on research policy worth mentioning is the representation of patient interests in health-related research. An example is the charity the James Lind Alliance, which combines the perspectives of patients, clinicians and carers to identify what they think are the most important unanswered research questions, but there are also many patient advocacy organizations that focus on a specific type of condition (e.g. the American Cancer Society or the American Heart Association).

🤔

Identify some desirable changes in broad science policy and assess how impactful they would be in expectation.

Investigate what allocation seems desirable between research with unpredictable ends, and research aiming to solve specific problems or challenges (roughly the explore-exploit dilemma of research agenda setting). This should include a review of previous work on this question.

Identify the most significant institutions for shaping global research policy and investigate how their decisions are made, the size of their budgets and their priority areas. This should include a survey of previous research on science policy development (e.g. by SPRU).

Investigate the implementation of previous research policies to understand how the wording of policy documents translates into specific funding allocations, research project proposals, and research results and publications.

Develop cost-effectiveness estimates for science policy work - how much in the way of resources seems to be needed to achieve relevant changes? How does this compare to other policy work, or to other research improvement work? Study the track record of existing organizations that have attempted to influence science policy (e.g. patient advocacy organizations).

Identify the most relevant career paths for influencing research policy.
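As a sketch of what a cost-effectiveness estimate for policy work could look like, here is a toy Fermi calculation. Every number is a made-up placeholder chosen only to illustrate the structure of such an estimate, not an actual claim about any campaign:

```python
# Toy Fermi estimate of the expected leverage of a science policy campaign.
# All inputs are hypothetical placeholders, not real estimates.
campaign_cost = 2_000_000          # hypothetical cost of a multi-year advocacy effort (USD)
p_success = 0.10                   # hypothetical probability the campaign shifts policy
budget_influenced = 1_000_000_000  # hypothetical research funding redirected on success (USD)
relative_value_gain = 0.20         # hypothetical fractional gain in value per redirected dollar

expected_gain = p_success * budget_influenced * relative_value_gain
leverage = expected_gain / campaign_cost
print(f"Expected gain: ${expected_gain:,.0f}, or ~{leverage:.0f}x the campaign cost")
```

The point of writing the estimate out like this is that each input can then be debated and improved separately, and the same structure can be reused to compare science policy work against other policy or research improvement work.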

Influence political attitudes and legislation

While some areas of research become politically popular and prioritized for public funding, the other end of the spectrum would be areas that become controversial, unpopular, regulated or banned. This can make certain research questions unattractive or impossible to pursue even though they might be important and potentially impactful to work on.

One example is research on psychedelic substances as potential drugs for conditions such as treatment-resistant depression or addiction. This seems like a field that could potentially achieve huge gains in welfare if such treatments turned out to be effective, but research in this field has been extremely limited by narcotics legislation. NGOs such as the UK-based Beckley Foundation and the US-based MAPS appear to have played important roles in the process of making such research possible once again.

Other areas, while not restricted by law, might just be perceived as unattractive to fund and support. Geoengineering research seems like one such area: a recent example is how local protests stopped a test in northern Sweden. Such popular opposition to a field of research could potentially also lead to legislation against it.

🤔

Study what strategies have been successful for affecting policy on psychedelic research. Are there lessons that would be useful for making controversial but important geoengineering research more feasible?

Influencing funding at grantmaker level

At the grantmaker level I believe there is often an opportunity to improve both value-alignment and decision-making quality. Grant decisions are made differently at different funding agencies, but a common process is to use some kind of peer review board that consists partly or completely of other researchers (“peers”). The reviewers are expected to have gone over application materials in advance of the meeting, and to make the decision together during the meeting. There are often explicit criteria for project selection that still leave lots of room for interpretation (e.g. “intellectual merit”, which is a criterion for NSF grants). The discussion can be more or less structured depending on the funding agency and the participants themselves, but it seems fair to say that decisions are often influenced as much by the social dynamics of the group and the personal preferences of the individuals as by rational reasoning and explicit criteria (the book How Professors Think gives detailed insight into the process of social science review boards in the US for anyone who wants a deep dive).

Peer review has been criticized on many grounds, including racial bias, inconsistency, inefficiency and risk-aversion. There are several existing initiatives that try to innovate and improve the grant-making process, for example by using lotteries, but although these would affect the incentives that researchers are exposed to, the focus is rarely on influencing the direction of research or the selection of research questions. In other words: these initiatives are about improving intelligence or “technical decision quality”, rather than improving the benevolence or value-alignment of grant decisions.

Improving research questions in a way that promotes welfarist value creation would probably involve work on selection criteria, both in terms of setting criteria that are value-aligned and in terms of making sure the selection process implements the criteria in a reliable and constructive way. My personal experience of assisting in work on research applications is that there is also a very real risk that well-intended and fundamentally constructive criteria can turn into counter-productive micromanagement, so this type of work should be done with care.

🤔

Investigate what decision criteria and decision processes are used by the most important research grantmakers.

Assess the expected impact of different existing (or novel) grant-making processes on research question selection and on the direction of research.

Assess the expected outcomes of changing the composition of review boards, for example by involving more non-experts in the grant review process to represent a wider or different range of values and priorities.

Develop and advocate for improved decision criteria and processes to be used by research grantmakers.

Influence academic culture: Promoting discussion on values and prioritization

Currently, the reform movements in scientific research focus on value-free concepts such as quality, reproducibility and transparency. Could we imagine similar movements that promote a selection of research questions that correspond better to global and future needs, and if yes, would that lead to an improvement to the current situation? 

The previously mentioned philosopher of science Philip Kitcher proposes that the setting of the research agenda and the prioritization between different fields and research questions should be done in discussions involving a representative group of the (global) population who have been tutored to understand the complexity of the issues. Though this seems practically very difficult to achieve, his writing might be taken as a general argument for a more explicit discussion of values and priorities, one that takes the interests of all stakeholders (including the global poor and future people) into account.

Kitcher does not propose welfarism as the ultimate goal, arguing that people also value things other than welfare and that such values (e.g. curiosity) should be accounted for. However, it seems to me that a shift in the direction of involving all stakeholders would still result in much higher alignment with “EA values” than the current situation. I think there are (at least) two significant challenges with Kitcher’s approach: firstly, to establish a discussion where the interests and values of all stakeholders are represented in a reasonable way, and secondly, to make sure that everyone in the discussion understands the science well enough for the discussion to be meaningful. Still, even if it cannot be done perfectly, some level of change in this direction might be valuable.

A more limited but practical approach is taken in a recent conference paper co-authored by Alexander Herwix (which also references effective altruism) that proposes a framework for discussing research question selection. The authors argue for a more systematic approach both on the level of specific research projects and on the broader level of setting a direction for a larger research programme. The paper is on information systems research, but the framework seems applicable to other fields as well. The framework is similar to a business model canvas and provides a basis for discussion where the outcome would depend on the values or priorities of the participants, but the framing nudges toward value-alignment by introducing criteria such as scale and tractability.

My best guess is that a more explicit discussion regarding values and prioritization in science would lead to higher value-alignment. This is based on my belief that most researchers would not intentionally and explicitly disregard the needs of the poor, the underprivileged or future people. I don’t think the consequences of such a discussion are obvious though, and there is a possibility that it could backfire if complex questions become politicized and reduced to Twitter discussions, which in turn would make science policy more political and less tractable to work with.

🤔

Review what contexts already exist in which academics and/or non-experts are involved in discussions about the research agenda and the selection of research questions.

Try setting up discussions on research priorities with selected groups of academics, for example with organizations such as the Global Young Academy, to get a better understanding of how such discussions could develop.

Investigate research using animal experiments as a test case for tutoring non-researchers so that they can participate in a discussion about the value of answering specific research questions. The knowledge gap between laypeople and researchers on ethics boards is a known obstacle to meaningful discussion in this context, and this could be a test case for developing training material to improve such discussions. The fact that there is already an established forum in the form of ethics committees, with laypeople who are willing to dedicate time to this work, is an advantage, though the political (emotional?) sensitivity of animal experiments could be a disadvantage.

Roadmapping: Increasing the understanding of current knowledge, ongoing research and relevant knowledge gaps

For decision makers who are (reasonably) value-aligned, it could make sense to improve their decision-making with regard to research questions simply by improving their access to clear information on current knowledge, ongoing research and relevant knowledge gaps that could be explored.

I expect initiatives in this section to be most impactful if they target a particularly value-aligned field of research or decision maker. It could also be possible, though, to implement initiatives in a way that nudges decision makers towards value-aligned conclusions, for example by promoting the use of key indicators or measures that make such values salient.

🤔

Analyze the level of value-alignment of one or several major funders in a research field. Does it seem like value-alignment or decision-quality is a bottleneck for impact? How much could the impact of their resources be improved by improving value-alignment and/or decision-quality?

Review articles

The standard way of mapping current knowledge in a field is through a review article that surveys and summarises previously published studies. Review articles play an important role in getting newcomers up to speed with a topic and are generally more accessible (as in, easier to understand for a non-expert) than the surveyed research articles themselves. Additionally, review articles often comment on the quality and robustness of the underlying studies, which is helpful for non-experts who want to use the results to inform, for example, policy or grantmaking.

Many review articles go further and identify “knowledge gaps” or suggest areas for further investigation, relying on implicit or explicit value judgements about what additional research would be most valuable. It is unclear to me to what extent such recommendations actually influence the direction of subsequent research, but it seems plausible that they have some influence simply by pointing out tractable directions. Possibly they also influence funding decisions, since the review article itself can be used to support grant applications on these topics.

🤔

Investigate to what extent the identification of knowledge gaps or recommendations for further research influences subsequent research projects.

In the current format I’m not convinced that it would be an especially good use of resources to simply prioritise doing more review articles in general, but it might be valuable for researchers who co-author review articles to influence the value-alignment of a knowledge-gap analysis or further research recommendations. An EA organization could potentially provide support and feedback for such work.

Establishing “R&D Hubs” for up-to-date information on ongoing research

A different way to establish an overview of a research field is to set up platforms, “R&D Hubs”, that offer up-to-date information on ongoing research projects. An example from global health is the Global AMR R&D Hub, which maps most of the ongoing research projects related to antimicrobial resistance worldwide. In this case, the information is collected from funding organizations. Long before the conclusion of a project, it’s possible to access the project title, the name of the PI, funding amounts and in many cases a brief project description. Dimensions is a project in the same direction, but one that aims to include all fields of research, and ClinicalTrials.gov provides information on clinical trials.

This type of platform seems most valuable in fields that have a high level of value-alignment but where coordination is a bottleneck. For a researcher it could make it easier to identify potential collaborators and to make sure they don’t unintentionally duplicate other projects, and for a grant-maker it makes it easier to evaluate where there seems to be more or less funding available from other sources. 

🤔

Evaluate existing platforms that provide information on ongoing research: Who uses them? Do they have any impact on decision-making, and if so, how?

It might be valuable to create knowledge-gap platforms listing promising research project ideas (for EA research, or for other fields).

Develop a new type of research field survey/review concept, combining the accessible, easy-to-grasp properties of a review article with the up-to-date and more comprehensive scope of an online database or R&D Hub.

In addition to published research, this could include information about ongoing research projects that have not yet led to publications, by sourcing information from preregistered research questions, information from research funders about what projects they have granted funding to, as well as preprints of unpublished research. Another possibility worth looking into could be to somehow include previously unpublished knowledge about failed experiments, as an alternative way to manage positive publication bias.

Note that this format would not automatically promote the selection of more value-aligned research questions, but adding a value-aligned knowledge-gap analysis of the field could potentially influence the decisions of the readers in this direction. 

My thoughts about publishing

Scientific publishing comes up very frequently in discussions about what is wrong with scientific research: journals exploit researchers who supply peer review services free of charge, charge too much for access, and pay no royalties on the material they publish (see for example this article if you are new to the topic). This doesn’t seem great, but I’m not convinced so far that publishing reform should be a priority for those who want to improve scientific research.

If the issue is that publishers make too much money without providing value for it, that might be unfair but not enough reason to believe that reforming publishing would fundamentally improve the value production of scientific research.

One might argue that the most prestigious journals in a field are setting the research agenda by selecting what to publish, and that they do this in a way that selects for less valuable research. I think, however, that this would not really be a problem if grantmakers and university hiring committees had sound criteria for decision-making.

I know that there are people who disagree strongly with me on this point, and I would love to see a post making the case for how reform of scientific publishing could improve the research system in more fundamental ways than simply reducing waste or improving efficiency.

Also: I do think that open access is valuable, especially for enabling research outside of rich universities, but the bottleneck there again seems to be acceptance by the research community (those who recruit for prestigious academic positions should be known to value high-quality open-access publications) rather than a lack of open-access journals.

Conclusion

To sum up, I think there are a number of interesting ideas (marked by the thinking emojis throughout the post) worth exploring in this area, especially related to increasing value-alignment of the research agenda.

I think it is generally difficult to change the dynamics of the research system, and it is therefore very important to think through whether a certain initiative is promising enough to be worth the effort. If an initiative succeeds in changing the system, that change might not be reversible: we should therefore also think in advance about possible unintended consequences.

I would love to get in touch with more people interested in these issues - do reach out if you consider working on any of these ideas or just would like to have a chat!


 

Comments

"There is a possibility that it (a more explicit discussion regarding values and prioritization in science) could backfire if complex questions become politicized and reduced to twitter discussions that in turn makes science policy more political and less tractable to work with."

Strongly agree with the risk of backfiring, and I think this is more likely than things going well.

I think if we promoted explicitly value-driven science, or discussion of it, the values that drive research priorities are more likely to become 'social justice values' than effective altruist values, leading to a focus on unsystematically selected, crowded and intractable cause areas, such as outcome inequalities among ethnic groups and sexes in rich English-speaking democracies. This is because these are the values more likely to be held by the people setting research priorities, not effective altruist values. I also think a change in this direction would be very difficult to reverse.

I think a better idea would be to selectively and separately campaign for research priorities to shift in predefined directions (i.e. one campaign for more focus on the problems affecting the global poor, another campaign for future generations, and another campaign for animals).

Thanks for your comment! I'm uncertain; I think it might also depend on the context in which the discussion is brought up and on the framing. But it's a tricky one for sure, and I agree specific targeted advocacy seems less risky.

Thanks for writing this post and for your efforts to address these issues! As someone who works in scientific research I have frustrations about the framework/criteria by which science is funded, so I'm really glad someone connected to effective altruism is looking into this, and I agree that this is a really important cause area!

Often the criteria used to judge proposals are based on things like how innovative, novel and timely the science is, whether it uses cutting-edge methodology, and how suited the candidate is to that study area. Also, as a reviewer of proposals I am asked to judge a proposal's 'excellence', a quite ambiguous quality that allows for significant randomness and bias too.

I think this approach to judging proposals is limited, as it means that a) potentially more important or impactful work will not be funded if it lacks novelty; b) people approaching a problem from a different discipline, with different expertise, are disadvantaged - yet many scientific breakthroughs come from those in other fields; c) it can lead to funding of niche fields of science, to the detriment of neglected areas of greater importance and wider tractability.

Incorporating the ITN framework would be beneficial: it may help fund more controversial or higher-risk science. Making decisions based on lotteries, after an initial sift, may also open up the field to more creativity and diversity, and counter some of the other biases that sadly occur. So I really agree with you here.
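To make the sift-then-lottery mechanism concrete, here is a minimal sketch of how it could work; all names, scores and thresholds below are invented for illustration, not from any real funder's process:

```python
import random

def sifted_lottery(proposals, passes_sift, n_awards, seed=None):
    """Fund proposals by lottery among those that clear an initial quality sift."""
    eligible = [p for p in proposals if passes_sift(p)]
    rng = random.Random(seed)
    # If fewer eligible proposals than awards, fund them all.
    return rng.sample(eligible, min(n_awards, len(eligible)))

# Hypothetical proposals with reviewer scores.
proposals = [
    {"id": "A", "score": 8.2},
    {"id": "B", "score": 4.1},
    {"id": "C", "score": 7.5},
    {"id": "D", "score": 9.0},
]

# Sift: keep anything scoring at least 7.0, then draw 2 winners at random.
winners = sifted_lottery(proposals, lambda p: p["score"] >= 7.0, n_awards=2, seed=42)
```

The point of the random draw is that, once a proposal clears the quality bar, reviewers' noisy 'excellence' rankings no longer determine the outcome.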

These issues relate to the more open calls for proposals and how they are judged, but there are also many schemes one can apply to that are quite narrowly defined and have been decided by relatively few senior academics on a science board. It is not clear to many in the community how these grant calls are decided, nor how rigorous and unbiased they are; making this process more transparent would be beneficial too. Your ideas based around cost-effectiveness and expected value might also provide more rigor to these decisions.

Anyway - really interested to see what you do with this, and let me know if I can be of help with the UK research council (UKRI) system. I'm currently part of the peer-review college for NERC and can find out more about specific protocols/decision-making if useful.

Thanks a lot for your comment and offer! I'll send you a message =)

Thanks for this! I have had several similar, but much less developed, ideas. This post was therefore quite helpful, as it expanded those ideas and suggested tangible projects to progress them. I am also pleased to see that more and more people in the EA community appear to be thinking about improving research. For me, evidence from research seems so critical to the EA evidence pipeline that we should be particularly concerned with how well research is done.

Of all the ideas you propose, which one or two have the highest expected ROI (considering tractability) in your view? If you had to fund one project (it could be your own) what would that project aim to do?
 

Thanks, great to hear =)

I’m quite unsure about which idea has the best ROI, and I think which idea would be most suitable would depend a lot on who was looking to execute a project. That said, I’m personally most excited about the potential of working with research policy at different levels - from my current understanding this just seems extremely neglected compared to how important it could be, and if I’d make a guess about which of these ideas I might myself be working on in a few years, it would be research policy.

Short term, I’d be most excited to see the projects happen that would provide more information (e.g. identifying the most important institutions, understanding how policy documents actually translate into specific research being done, understanding the dynamics of existing contexts where experts and non-academics discuss research agendas, evaluating existing R&D hubs, etc.) - with this information available, I would hope it would be possible to prioritise between the different larger projects.

I’m curious, what would you yourself think would be most important and/or have the best ROI?

Quick thoughts: 

Thanks for responding. I generally agree! I also struggle to pick out an obvious highest-priority choice. Two I liked were:

Identify the most significant institutions for shaping global research policy and investigate how their decisions are made, the size of their budgets and their priority areas. This should include a survey of previous research on science policy development (e.g. by SPRU).


Investigate the implementation of previous research policies to understand how the wording of policy documents translates into specific funding allocation, research project proposals, and research results and publications.

I also like the idea of more reviews to 'legitimate' new lines of research - this is something I have often tried to do in my own research.

I think that some sort of community building might be one of the highest ROI activities that is being missed. The possible projects and their impacts are all going to be heavily mediated by the capacity of the community/available human capital. One reason why I like seeing these posts and the ongoing Slack conversations etc!

I consider the issue of identifying knowledge gaps (and the semi-opposite concept: arguments/experiments which are highly influential in a field) as another good potential use case/benefit of epistemic mapping: both the importance of and lack of research on a concept may become more apparent with visualization--especially if the map/graph shows that the claim or assumption is very influential in the literature but it has not received much support or scrutiny. I'd be curious to get your thoughts on the potential usefulness/viability of such a project for such purposes.

Interesting. I think a challenge would be to find the right level of complexity for a map like that - it needs to be simple enough to give a useful overview, but complex enough that it models everything that's necessary to make it a good tool for decision-making.

Who do you imagine would be the main user of such a mapping? And for which decisions would they mainly use it? I think the requirements would be quite different depending on whether it's to be used by non-experts such as policymakers or grantmakers, or by researchers themselves.

I agree the complexity level question is a tough question, although my impression has been that it could probably be implemented with varying levels of complexity (e.g., just focusing on simpler/more-objective characteristics like “data source used” or “experimental methodology” vs. also including theoretical arguments and assumptions). I think the primary users would tend to be researchers—who might then translate the findings into more-familiar terms or representations for the policymaker, especially if it does not become popular/widespread enough for some policymakers to be familiar with how to use or interpret it (somewhat similar to regression tables and similar analyses). That being said, I also see it as plausible that some policymakers would have enough basic understanding of the system to engage/explore on their own—like how some policymakers may be able to directly evaluate some regression findings.

Ultimately, two examples of the primary use cases I envision are:

  1. Identifying the ripple effects of changes in assumptions/beliefs/datasets/etc. Suppose for example an experimental finding or dataset which influenced dozens of studies is shown to be flawed: it would be helpful to have an initial outline of what claims and assumptions need to be reevaluated in light of the new finding.
  2. Mapping the debate for a somewhat contentious subject (or just anything where the literature is not in agreement), including by identifying if any claims have been left unsupported or unchallenged.

It seems that such insights might be helpful for a researcher trying to decide what to focus on (and/or a grantmaker trying to decide what research to fund).
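Use case (1) above could be prototyped as a simple reachability query over a directed graph of claims; the graph and claim names in this sketch are invented purely for illustration:

```python
from collections import deque

# Hypothetical claim graph: an edge X -> Y means claim Y relies on claim X.
claim_graph = {
    "dataset_1": ["finding_A", "finding_B"],
    "finding_A": ["theory_X"],
    "finding_B": [],
    "theory_X": ["policy_rec_Y"],
    "policy_rec_Y": [],
}

def downstream_claims(graph, flawed):
    """Return every claim that (transitively) relies on the flawed one."""
    affected, queue = set(), deque([flawed])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# If dataset_1 turns out to be flawed, everything downstream of it
# needs re-evaluation: both findings, the theory, and the policy recommendation.
to_reevaluate = downstream_claims(claim_graph, "dataset_1")
```

The hard part in practice would of course be building and maintaining the graph itself, not the traversal; but even a coarse graph would give the "initial outline of what needs to be reevaluated" described above.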

Cool - my immediate thought is that it would be interesting to see a case study of (1) and/or (2) - do you know of this being done for any specific case? Perhaps we could schedule a call to talk further - I’ll send you a DM!

I'll also mention that I think mapping could be very useful, particularly for onboarding new people and increasing the overlap of understanding between those already involved. It can also help to pick out key behaviours/actors to target. I am thinking of something like this diagram, as an example.

I agree that such mapping seems like it could be very useful for catching people up to speed on a complicated/messy topic more quickly, particularly since I see it as a more efficient/natural way of conveying and browsing information about a controversial or complicated question. 

As for the diagram you mentioned, I think something like that might be helpful, but personally I think a map for broader academic questions should probably use semantically richer relationship information than just (+) and (-). For an example of what I mean, see the screenshots of diagrams included in my post on the subject.