Should marginal longtermist donations support fundamental or intervention research?

by MichaelA · 15 min read · 30th Nov 2020 · 4 comments


Tags: EA Funding · Donation Writeup · Existential risk · Cause Prioritization

One question EAs face is how to allocate scarce resources between figuring out what to do (i.e., research) and actually doing it (i.e., implementation). A further question is how to divide the resources that are allocated to research between what we could call “intervention research” - e.g., quantifying and comparing the cost-effectiveness of different interventions - and what we could call “fundamental research” - e.g., identifying crucial considerations and evaluating their implications.

This post focuses on a related, more specific question: Within the cause area of longtermism, should the next million dollars of EA research funding be allocated to intervention research or to fundamental research (assuming we could only allocate to one of those broad categories)?[1]

That said, most of what I write should also be relevant to other amounts of marginal funding, and perhaps outside of longtermism.

This post is adapted from a work test I did for Rethink Priorities, but the views expressed are my own, as are any errors.

Epistemic status/effort: I spent only around 5 hours on the work test and around 3 hours later on editing/adapting it, though I happened to have also spent a bunch of time thinking about somewhat related matters previously.[2]

Key takeaways

  1. It’s most useful to distinguish intervention research from fundamental research based on whether the aim is to:
    • better understand, design, and/or prioritise among a small set of specific, already-identified intervention options, or
    • better understand aspects of the world that may be relevant to a large set of intervention options (more)
  2. Within longtermism, two key arguments for allocating $1M to fundamental rather than intervention research are that:
    • Our fundamental understandings are probably inadequate, and improvable by fundamental research (more)
    • Marginal funding for fundamental research is probably still quite useful for improving our fundamental understandings (more)
  3. Within longtermism, some arguments for allocating $1M to intervention rather than fundamental research are that:
    • Intervention research may “pay off” faster, and this could be important - particularly if, within decades from now, “leverage over the future” will decrease or “windows of opportunity” will close (more)
    • Better fundamental understandings among researchers might have little influence anyway (more)
    • There may be reputational harms from longtermism seeming overly abstract (more)
    • Intervention research may provide better feedback loops, which may currently be very valuable (more)
  4. I’m moderately confident that, from a longtermist perspective, $1M of additional research funding would be better allocated to fundamental rather than intervention research (unless funders have access to unusually good intervention research opportunities, but not to unusually good fundamental research opportunities) (more)
  5. Several key uncertainties related to this area remain, and I expect substantial progress could be made on them with even just a few additional hours of thought (more)

1. Distinguishing intervention and fundamental research

My distinction between intervention and fundamental research is somewhat similar to the common distinction between applied research and basic research:[3]

Basic research, also called pure research or fundamental research, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques which can be used to intervene and alter natural or other phenomena. (Wikipedia; emphasis in original)

So applied research focuses on specific technologies or techniques for intervening on various phenomena. Analogously, intervention research (as I use the term) aims to better understand, design, and/or prioritise among a small set of specific, already identified intervention options. For example, one may do research related to the theories of change for, best practices for, and/or cost-effectiveness of one of the following:

  • specific deworming or antimalarial interventions
  • specific cultivated meat or vegan advocacy interventions
  • specific AI alignment techniques
  • specific “alternative food” options (to be used if the sun is obscured and/or industry is disabled)

In contrast, basic research focuses on improving general theories for understanding and predicting various phenomena. Analogously, fundamental research (as I use the term) aims to better understand aspects of the world that may be relevant to understanding, designing, prioritising among, and/or discovering any of a large set of intervention options. For example, one may do research related to:

  • why disease is more prevalent and harder to address in developing than developed countries
  • the drivers of and barriers to public support for animal welfare interventions
  • when and how transformative AI may be developed
  • the likelihood of various large-scale catastrophes

It’s worth noting that I’m drawing this distinction based on the aim of the research, not its methods or its object-level focus. This is primarily because the best way to achieve the goals of fundamental research may range from (1) very theoretical, abstract, big-picture, macrostrategy work to (2) very empirical, concrete, granular investigation of specific phenomena.[4] Furthermore, the phenomena in question may sometimes happen to be specific intervention options, and the investigations may involve constructing cost-effectiveness analyses or other models.

For example, let’s say a researcher investigates a specific AI alignment technique. This is intervention research if the researcher does this in order to better understand or implement that technique, or to decide how many resources to allocate to that technique. But this is fundamental research if the specific AI alignment technique is just being used as something like a “case study”, in order to build up more detailed models of things like:

  • what tend to be the “gears” of how alignment techniques in general work
  • what the key barriers to such interventions tend to be
  • what the investigator, or people more generally, are still confused about in relation to such interventions[5]

See here for more on distinguishing intervention and fundamental research.

2. Arguments for directing marginal longtermist money to fundamental research

2.1 Our fundamental understandings are probably inadequate, and improvable by fundamental research

Todd (2020) writes:

I’ve come to better appreciate how little we know about what longtermism implies. Several years ago, it seemed clearer that focusing on reducing existential risks over the coming decades—especially risks posed by AI—was the key priority.

Now, though, we realise there could be a much wider range of potential longtermist priorities, such as patient longtermism, work focused on reducing risk factors rather than risks, or other trajectory changes. There has been almost no research on the pros and cons of each.

There are also many crucial considerations, which could have huge implications for our priorities. I could see myself significantly changing my views if more research was done.[6]

Todd gives this as a reason why global priorities research seems very important. It can also be seen as a reason why fundamental research is very important. The more our fundamental understandings are inadequate or likely to change, the more we should favour fundamental research relative to intervention research, because:

  1. That makes it likelier that the intervention research we currently think is valuable would turn out to be unimportant
    • This could happen if we’d be deeply investigating the wrong interventions, or if our intervention research would neglect crucial considerations or be guided by false assumptions
  2. That makes it likelier that the intervention research we’d do, or decisions based on it, would turn out to be harmful
  3. That makes it likelier that fundamental research would provide insights that’d help us better guide our intervention research and actual decisions[7]

One could counterargue that our fundamental understandings may in fact now be fairly adequate, or at least unlikely to change. I find that counterargument somewhat plausible, but not very strong, because it seems to me that:

  • Most longtermist ideas and analyses are quite young, and haven’t been subjected to large amounts of scrutiny (e.g., many have only been written up as relatively brief blog posts; see Garfinkel, 2020).
  • It’s probably much harder to predict far-future impacts than nearer-term impacts, especially as the former predictions often relate to extreme and unprecedented events in very unfamiliar conditions.[8]
    • And yet even in many near-term-focused domains, such as global health or animal welfare, we arguably lack a strong, stable fundamental understanding.
  • My impression is that thoughtful longtermists tend to think we’re still deeply uncertain about many key considerations.
    • However, I haven’t collected systematic data. And I think that whether I subjectively classify a longtermist as “thoughtful” in the first place is probably influenced by whether they express that sort of view.

One could also counterargue that EA-funded fundamental research might be unlikely to substantially improve our fundamental understandings, or less likely to do so than other uses of resources would be. I find this second counterargument more compelling than the first one. This is essentially because it seems to me that:

  • Plausibly, gaining a better fundamental understanding is important but not tractable
  • Plausibly, particular types of intervention research or implementation would improve our fundamental understanding more effectively than many types of fundamental research would
  • Plausibly, improvements will mainly occur as a result of “exogenous learning”, meaning learning from advances that one did not bring about oneself
    • E.g., “advances in the scientific community [that were funded by other people], new philanthropic interventions being invented and/or tried out [by other people], moral progress, and more” (Hoeijmakers, 2020)

It might be valuable to think more about these questions of how inadequate and likely to change longtermists’ fundamental understandings are, and the extent to which fundamental research vs other things may improve those understandings (see my Key uncertainties and next steps).[9]

Altogether, the argument that our fundamental understandings may be inadequate and improvable by fundamental research pushes me substantially in favour of marginal longtermist research money being allocated to fundamental rather than intervention research. But this push is substantially smaller than it would’ve been if not for the counterarguments mentioned (primarily the second one).

2.2 Marginal funding for fundamental research is probably still quite useful for improving our fundamental understandings

Conditional on argument 2.1 being roughly correct, I expect that well-directed marginal funding for longtermist fundamental research would be quite valuable. This is largely because I haven’t seen strong indications that this area is very crowded, and I can think of a variety of plausibly valuable projects that are not being done but plausibly could be done given funding. For example, I expect that additional funding for the EA Long-Term Future Fund, GCRI, or ALLFED could lead to additional fundamental research being done, as well as helping build a pipeline of people who could do fundamental research in future.[10]

That said, I also assign substantial credence (perhaps 5-33%) to the claim that longtermist fundamental research will be well supplied without further funding, such that there are low returns to further funding. This could be true:

  • If longtermist fundamental research is already sufficiently well funded by EAs
  • If funding for longtermist fundamental research is likely to grow in future
  • If roughly equivalent work is being or will be funded by non-EA sources
  • If roughly equivalent work would often be done without funding anyway (e.g., by EAs writing blog posts in their spare time because the topics fascinate them)

I expect that another 10 hours of research and thought could give me a clearer sense of how much value there’d be in further funding for fundamental research (conditional on argument 2.1 being true), so I list this as a key uncertainty below.

3. Arguments for directing marginal longtermist money to intervention research

3.1 Speed and urgency

3.1.1 Intervention research may “pay off” faster

The lag between research beginning and it having an “actual impact on the world” is likely longer for fundamental research than for intervention research. My quick guess is that there’s a 90% chance fundamental research takes, on average, between 2 and 60 years longer to “pay off” than intervention research does. I’d also guess that this broadly aligns with the views of most longtermists (see, for example, Rozendal et al., 2019).

I see two key reasons why fundamental research might take longer to “pay off”:

  1. It may take longer to conduct fundamental rather than intervention research.
    • I’m ~70% confident that this is true, but also ~70% confident the difference is smaller than 10 years.[11]
  2. The insights from fundamental research may tend to be less directly actionable than the insights from intervention research, and may often need to be supplemented by later fundamental and intervention research in order to inform decisions.
    • I’m ~80% confident that this is true, and wouldn’t be surprised if it created a gap of several decades in payoff times.

But it’s very hard to be sure about each of these claims, or even about precisely what I mean by them. This is for reasons including the heterogeneity within and overlap between the two categories of research, our uncertainty about what “actual impacts on the world” we want to have and how to achieve them, and the limited data on this matter (at least for EA-funded research and longtermism-relevant impacts).

3.1.2 Speed may matter

If fundamental research does tend to take longer to pay off, this could push in favour of intervention research. This is because there are three reasons why influencing sooner decisions may be better than influencing later decisions.[12]

Reason 1

If someone has a pure time preference, they’ll intrinsically care more about the same impact if it happens sooner rather than later. However, I think this consideration warrants very little weight, because my impression is that:

  • Most EAs, and especially most longtermists, don’t endorse pure time preferences on reflection
  • “Thought leaders” in longtermism argue strongly against pure time preferences (see, e.g., Appendix A of The Precipice)
  • Most moral philosophers would also argue against pure time preferences (see again Appendix A of The Precipice)
  • Even given some low rate of pure time preference, a gap of 2 to 60 years may not lead to discounting by an order of magnitude or more
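That last point is easy to sanity-check with a quick calculation. The 0.5% and 3.9% annual rates below are illustrative placeholders, not rates defended in this post:

```python
# Toy check: how much does a pure time preference discount a payoff
# delayed by 2-60 years? (Rates here are illustrative placeholders.)
def discount_factor(rate: float, years: float) -> float:
    """Value multiplier for a payoff delayed by `years` at annual discount `rate`."""
    return (1 + rate) ** -years

# At a low rate of pure time preference (0.5%/year), even a 60-year
# delay discounts value by far less than an order of magnitude:
print(round(discount_factor(0.005, 60), 2))  # ~0.74
print(round(discount_factor(0.005, 2), 2))   # ~0.99

# An order-of-magnitude discount over 60 years would require roughly 3.9%/year:
print(round(discount_factor(0.039, 60), 2))  # ~0.10
```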

Reason 2

“Leverage over the future” (also referred to as “hingeyness”) may decrease substantially over the next few decades or centuries. If so, then decisions made sooner may tend to matter more.

However, it seems very uncertain whether leverage over the future will actually decrease over the next few decades or centuries, how large that decrease would be, and how soon the decrease would begin. Furthermore, thoughtful longtermists seem divided on the issue. For a range of considerations and views, see MacAskill (2019), the comments on that post, and Aird (2020).[13]

Overall, and with great uncertainty, I assign perhaps 10-50% credence to each of the following (mutually exclusive) positions:

  1. “Leverage over the future will decrease substantially, starting within the next few decades”
  2. “Leverage over the future will decrease substantially, starting sometime between a few decades and a few centuries from now”
  3. “At least for the next 1000 years or so, leverage over the future will decrease only slightly, stay roughly the same, or will increase”

My credence in the first position pushes me notably in favour of intervention research relative to fundamental research. My credence in the second position also pushes me in favour of intervention research, but only slightly, as a 2 to 60 year gap may well not matter if position 2 is true.

Reason 3

The third reason why influencing sooner decisions may be better than influencing later decisions is related to the second reason, and could perhaps be seen as a subset. It’s that there may be “windows of opportunity” within specific domains that are open now, or will be open soon, but that’ll close over the next few decades or centuries. For example, certain frameworks, principles, or norms may soon be set in the area of AI policy, and this may make AI policy much easier to influence now than in future (see Moës, 2020). This could be seen as a decrease in leverage over the future within a specific domain, even if “average” leverage over the future doesn’t decrease.

This pushes in favour of intervention rather than fundamental research if intervention research is more likely to inform actions relevant to those windows of opportunity before those windows close. For example, we may wish to research which specific AI policy principles to push for, even if we still have relevant fundamental uncertainties, if we think resolving those uncertainties would take too long and that our “best guesses” are already better (in expectation) than what would be suggested by others (see Rozendal et al., 2019; Aird, 2020).

This consideration pushes me in favour of intervention rather than fundamental research, but only somewhat. This is because intervention research may only pay off a few years or decades faster than fundamental research, and it’s not clear whether there are many windows of opportunity that will close within that time.

3.2 Arguments I spent little time considering

These are arguments that, collectively, likely warrant a few more hours’ thought.

3.2.1 Perhaps better fundamental understandings among researchers would have little influence anyway

Even if one accepts arguments 2.1 and 2.2, one might think that key decision-makers are unlikely to attend to or understand researchers or their (fundamental research) outputs anyway. In that case, even an improved fundamental understanding may not actually result in valuable impacts on the world.

This is probably true in relation to some decision-makers. But I’d expect that at least some important decisions will be influenced by improved fundamental understandings among researchers, either relatively “directly” or via mediating actors. For example, I expect such understandings to sometimes influence the career advice provided by 80,000 Hours, the funding decisions made by the Open Philanthropy Project, and governmental decisions informed by policy advice from FHI or CSER.

It’s also worth noting that a similar argument could be made against intervention research, though perhaps with somewhat less force.

3.2.2 Perhaps there are reputational harms from seeming overly abstract

EA is sometimes criticised for being overly theoretical, intellectual, abstract, or focused on ideas. Additional fundamental research could contribute to those perceptions of EA.

That said, EA is also sometimes criticised for seeming overconfident, insufficiently rigorous, overly focused on certain narrow areas without good reason, and so on. Additional fundamental research, or the way our beliefs and actions change in light of it, could reduce those perceptions of EA. And I’d tentatively guess that most of the people and organisations who it would be most valuable to influence would be more worried if EA seemed overconfident and insufficiently rigorous than if it seemed overly abstract and idea-focused.[14]

3.2.3 Perhaps intervention research provides better feedback loops, and perhaps these are currently very valuable

Longtermism is young and is tackling tough problems, and many longtermist researchers have relatively little experience. It may thus be valuable to focus on building longtermists’ skills in everything from research to running organisations well. Intervention research may be more conducive to that than fundamental research, due to shorter feedback loops.

This seems plausible, but it again seems worth recalling that fundamental research can take many forms, including quite concrete and empirical forms.

4. Conclusion

I’m moderately confident that, from a longtermist perspective, $1M of additional research funding would be better allocated to fundamental rather than intervention research, unless funders have access to unusually good intervention research opportunities, but not to unusually good fundamental research opportunities.[15] (Recall that this is about the aims of the research; this post has not addressed what form that fundamental research should take.)

That conclusion is driven primarily by my beliefs that:

  • Our longtermism-relevant fundamental understandings are probably inadequate, and improvable by fundamental research
  • Marginal funding for longtermist fundamental research is probably still quite useful for improving our fundamental understandings

My confidence in that conclusion is tempered by the counterarguments outlined above. Most notably, it’s plausible to me that fundamental research:

  • isn’t very tractable
  • isn’t very neglected
  • is slower than intervention research in a way that may be problematic due to potential changes in leverage over the future or windows of opportunity

I’m also somewhat concerned by the fact that I’ve mainly identified and shallowly, qualitatively analysed a small set of considerations, and that my conclusion aligns with what I think I would’ve intuitively favoured beforehand. On the other hand, I think those intuitions are largely based on having thought about related matters previously, rather than just something like my personality.

5. Key uncertainties and next steps

Answers to the first four of the following questions could plausibly strengthen or reverse my above conclusion. Additionally, I think that even just a few hours of thought could be enough to make meaningful progress on these questions, and that it’d be worth doing so.

  1. How will “leverage over the future” (also referred to as “hingeyness”) change over time?
    • Aird (2020) lists, comments on, and provides links for a series of key questions relevant to how leverage over the future will change over time. Further work on this could use that as a starting point.
  2. To what degree are longtermists’ views about which interventions or broad areas to prioritise likely to change in future?
    • To shed light on this, one could investigate to what degree longtermists’ views on that matter have changed in the past (see also Todd, 2020, and the commenters on Daniel, 2020).
  3. To what degree would $1M of additional, EA-funded fundamental research help longtermists better understand, design, prioritise among, and/or discover intervention options? How does this compare to the degree to which $1M of additional, EA-funded intervention research would help with this?
    • As above, to shed light on this, we can ask:
      1. To what degree have longtermists’ abilities to understand, design, prioritise among, and/or discover intervention options improved in the past?
      2. How much of that was driven by EA-funded fundamental research (as opposed to intervention research or exogenous learning)?
      3. How many resources did that research cost?
  4. Can we construct useful, relatively low-effort Fermi estimates or models relevant to the uncertainties listed in this section?
    • For example, models of the cost-effectiveness of $1M of fundamental research, or of how much fundamental research has affected important beliefs and decisions in the past.
  5. Which types of EA-funded fundamental research would produce the greatest benefits (per unit of resources allocated)?[16]
    • This is beyond the scope of the present post, but would clearly be relevant when actually allocating resources to fundamental research.
    • Additionally, this could be analysed in tandem with the above questions; e.g., when analysing the benefits-per-resource-allocated of EA-funded fundamental research to date, one could also pay attention to how this has varied across types of fundamental research.
    • The ITN framework would likely be useful here. For example, work in moral philosophy and philosophy of mind seems clearly important, but I’m quite unsure how tractable or neglected it is, and think I could learn more about that with some research.[17]
  6. How much of the above analysis holds for other cause areas, such as animal welfare or global health and development?
    • This is also beyond the scope of the present post, but could be an important next step.
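As one illustration of how low-effort a first pass at question 4 could be, here is a deliberately crude Fermi sketch. All inputs are made-up placeholders, not estimates defended anywhere in this post:

```python
# Crude Fermi sketch (placeholder inputs): expected value of $1M of
# fundamental research, modelled as a chance of usefully redirecting
# some pool of future longtermist spending.
def fermi_value_of_research(p_insight, redirected_budget, value_gain):
    """p_insight: chance of a decision-relevant insight;
    redirected_budget: future spending ($) the insight would redirect;
    value_gain: fractional effectiveness gain on that spending."""
    return p_insight * redirected_budget * value_gain

# Hypothetical inputs: a 10% chance of redirecting $100M of future
# spending to options 20% more effective.
expected_value = fermi_value_of_research(0.10, 100e6, 0.20)
print(f"${expected_value:,.0f}")  # $2,000,000 - i.e., 2x the $1M cost
```

Even a model this simple makes the key cruxes (probability of a decision-relevant insight, size of the influenced budget) explicit and separately debatable.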

  1. Of course, in reality, one could also donate to support implementation rather than research, or carve up the space of possibilities differently. ↩︎

  2. For example, when writing Crucial questions about optimal timing of work and donations. ↩︎

  3. It's also somewhat similar to Rozendal, Shovelain, and Kristoffersson’s (2019) distinction between “strategy research” and “tactics research”. And fundamental research, as I use the term, is somewhat similar to the concepts of “global priorities research”, “cause prioritisation research”, and “macrostrategy research”. ↩︎

  4. See also Rodriguez (2020) on “empirical global priorities research”.

    And see also weeatquince’s (2020) argument that empirical and relatively granular work could be an important part of “cause prioritisation research”. ↩︎

  5. This would be analogous to how historians might sometimes read primary sources and build detailed pictures of specific events, even if their aim is a broader understanding of some society or phenomena, because this lets them get “a feel” for it. It would also be analogous to learning about how governments work in general via working in a particular job in government, rather than via reading academic analyses of the topic. ↩︎

  6. See also my collection of crucial questions for longtermists. ↩︎

  7. See also Rozendal et al. (2019) for a similar set of arguments. ↩︎

  8. See also Muehlhauser (2019). ↩︎

  9. Here it’s also worth again noting that fundamental research is a broad category, and that I’ve defined it by its aims, not its object-level methods or focuses. Thus, if one thinks that, for example, abstract macrostrategy research would provide little fundamental insight per dollar whereas constructing rough cost-effectiveness models of many things could provide more, this could just suggest we should fund the latter sort of work as fundamental research. ↩︎

  10. Each of these institutions does/funds both longtermist fundamental research and other things.

    Todd (2020) and Rozendal et al. (2019) also argue that longtermist global priorities research and strategy research, respectively, are neglected. ↩︎

  11. It’s worth noting here that fundamental research can take many forms, including low-effort forms like blog posts which simply make one’s intuitive models of some phenomenon more explicit (see Garfinkel, 2020). ↩︎

  12. Note that these reasons could also each be reasons for prioritising implementation relative to research, giving now rather than giving later, or doing direct work now rather than later (e.g., after building career capital). But those matters are beyond the scope of this post. (For discussion of the latter two matters, see Aird, 2020.) ↩︎

  13. Also relevantly, Todd (2020) writes “I now put a greater credence in patient longtermism compared to the past (due to arguments by Will MacAskill and Phil Trammell and less credence in very short AI timelines), which makes [global priorities research] look more attractive.” ↩︎

  14. Furthermore, those who have a negative perception of EA for seeming overly abstract (or similar things) may really be focusing on research as a whole versus implementation, rather than fundamental versus intervention research. ↩︎

  15. Similarly, due to variants of the arguments given in this post, I'm moderately confident that marginal longtermist research hours are better allocated to fundamental rather than intervention research, all other factors held constant. But there the other factors (e.g., comparative advantage) seem more important for most individual researchers' decisions. ↩︎

  16. This could include considering whether the researchers should be EAs or non-EA researchers funded by longtermist donations. ↩︎

  17. For a prior attempt to tackle this sort of question, see Hill and Sevilla (2019). ↩︎


Comments

Loved this post! It's a great reference to relevant discussions, contains many key considerations, and has actionable next steps. I also found the use of expert evaluation very interesting and epistemically modest. 

I suspect you want a mix of both, and fundamental research helps inform what kind of intervention research is useful, but intervention research also helps inform what kind of fundamental research is useful. Given a long-term effect, you can try to find a lever which achieves that effect, or given a big lever that's available for pulling, you can try to figure out what its long-term effect is likely to be.

Yeah, I'd agree with this. This post is just about what to generally prioritise on the margin, not what should be prioritised completely and indefinitely.

fundamental research helps inform what kind of intervention research is useful, but intervention research also helps inform what kind of fundamental research is useful

That sentence reminded me of a post (which I found useful) on The Values-to-Actions Decision Chain: a lens for improving coordination.

Also, while I agree with that sentence, I do think it seems likely: 

  • that fundamental research will tend to guide our intervention research to a greater extent than intervention research guides our fundamental research
  • that it'd often make sense to gradually move from prioritising fundamental research to prioritising intervention research as a field matures. (Though at every stage, I do think at least some amount of each type of research should be done.)

This also reminds me of the post Personal thoughts on careers in AI policy and strategy, which I perhaps should've cited somewhere in this post.

(Here it's probably worth noting again that I'm classifying research as fundamental or intervention research based on what its primary aim is, not things like how high-level vs granular it is.)

I just read Jacob Steinhardt's Research as a Stochastic Decision Process, found it very interesting, and realised that it seems relevant here as well (in particular in relation to Section 2.1). Some quotes:

In this post I will talk about an approach to research (and other projects that involve high uncertainty) that has substantially improved my productivity. Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren't already employing a similar process.

Below I analyze how to approach a project that has many somewhat independent sources of uncertainty (we can often think of these as multiple "steps" or "parts" that each have some probability of success). Is it best to do these steps from easiest to hardest? From hardest to easiest? From quickest to slowest? We will eventually see that a good principle is to "reduce uncertainty at the fastest possible rate". [...]

Suppose you are embarking on a project with several parts, all of which must succeed for the project to succeed. [Note: This could be a matter of whether the project will "work" or of how valuable its results would be.] For instance, a proof strategy might rely on proving several intermediate results, or an applied project might require achieving high enough speed and accuracy on several components. What is a good strategy for approaching such a project? For me, the most intuitively appealing strategy is something like the following:

(Naive Strategy)
Complete the components in increasing order of difficulty, from easiest to hardest.

This is psychologically tempting: you do what you know how to do first, which can provide a good warm-up to the harder parts of the project. This used to be my default strategy, but often the following happened: I would do all the easy parts, then get to the hard part and encounter a fundamental obstacle that required scrapping the entire plan and coming up with a new one. For instance, I might spend a while wrestling with a certain algorithm to make sure it had the statistical consistency properties I wanted, but then realize that the algorithm was not flexible enough to handle realistic use cases.

The work on the easy parts was mostly wasted--it wasn't that I could replace the hard part with a different hard part; rather, I needed to re-think the entire structure, which included throwing away the "progress" from solving the easy parts. [...]
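(As a toy illustration of the "reduce uncertainty at the fastest possible rate" principle - with made-up step probabilities and costs, not figures from Steinhardt's post:)

```python
# Toy model: a project has several steps; work stops at the first failed
# step, so effort spent before a failure is wasted. Expected total effort
# depends on the order in which you attempt the steps.
def expected_effort(steps):
    """steps: list of (success_probability, cost). Returns expected cost paid."""
    total, p_reached = 0.0, 1.0
    for p, cost in steps:
        total += p_reached * cost  # pay this step's cost only if we got this far
        p_reached *= p             # chance we survive to attempt the next step
    return total

easy = (0.95, 10)  # easy step: likely to succeed
hard = (0.30, 20)  # hard step: likely to fail

# Easiest-first: you pay for the easy step even when the hard one would
# have killed the project anyway.
print(expected_effort([easy, hard]))  # 10 + 0.95*20 = 29.0
# Hardest-first: you usually fail cheaply before touching the easy step.
print(expected_effort([hard, easy]))  # 20 + 0.30*10 = 23.0
```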

I expect that, on the current margin in longtermism:

  • fundamental research will tend to reduce uncertainty at a faster rate than intervention research
  • somewhat prioritising fundamental research would result in fewer hours "wasted" on relatively low-value efforts than somewhat prioritising intervention research would

(Though those are empirical and contestable claims - rather than being true by definition - and Steinhardt's post wasn't specifically about fundamental vs intervention research.)