
The neglectedness criterion is commonly used within effective altruism as a way of determining whether a cause area is high impact. As The Open Philanthropy Project puts it, “all else being equal, we prefer causes that receive less attention from others”. Why should we have that kind of preference? The given rationale is that most opportunities are subject to diminishing returns, where the impact of additional resources declines with the total amount of resources already being put toward that opportunity. However, as I will attempt to show in this post, diminishing returns is not sufficient to guarantee a positive relationship between neglectedness and the marginal impact of more funding. Furthermore, diminishing returns may not be as common as is usually assumed. Overall, these considerations raise the concern that the neglectedness heuristic is not useful in many contexts.

The words “neglectedness” and “crowdedness” get used a lot in effective altruism, sometimes to refer to very different ideas. In this post, I use crowdedness to mean the amount of resources being put toward a cause (or project, opportunity, intervention, etc.), and neglectedness to mean the opposite of crowdedness. This is how the terms are defined in 80,000 Hours’ framework. By contrast, I’ve occasionally heard neglectedness used to mean “fewer resources are going toward this cause than should be” or “the results of this intervention are valued less by other funders than they are by me”. Those are not the meanings that I have in mind in this post.

The post is organized in the following way: first is a qualitative discussion of the use of the neglectedness heuristic in situations with diminishing returns, then a short discussion of the possibility of increasing returns, then a section on how these ideas relate to the 80,000 Hours cause prioritization framework. At the end I present the simple mathematical model that I used to generate my conclusions (most readers can skip this).

Key Points:

· Even under diminishing returns, neglectedness is expected to be a weak proxy for marginal impact in cases where other actors have similar values to us, are well-informed, and are somewhat rational (this is probably clear to most people, but it provides a good starting point).

· Cause selection based on other factors that influence neglectedness can be more efficient than selection based on neglectedness itself. One such factor is the importance of an opportunity according to your value system relative to the value systems of others.

· Relative to other contexts (such as firms operating in market environments), there is less reason to expect diminishing returns for altruistic opportunities; the possibility of increasing returns further calls into doubt the general usefulness of the neglectedness heuristic.

When is neglectedness correlated with marginal impact?

The neglectedness heuristic depends on the idea that cause areas with fewer resources invested in them will have a larger marginal impact (adding a small amount of additional resources will make a larger difference). At first glance, this may seem to be implied by diminishing marginal returns. And in fact, if the amount of resources going toward each cause is randomly determined, then diminishing returns does guarantee that neglectedness correlates strongly with marginal impact.

However, in most circumstances, we wouldn’t expect other funders to be acting in a completely random fashion. Yes, I realize that effective altruism is motivated by the fact that people frequently attempt to do good in sub-optimal ways, but that doesn’t mean that people aren’t at least somewhat thoughtful when making decisions on how to do good (see GiveWell’s article: https://blog.givewell.org/2013/05/02/broad-market-efficiency/).

To see why this matters, let’s consider an extreme case: you are in a situation where all other donors share your values and are acting to maximize their marginal impact. This might be close to the decision of choosing between causes that are highly regarded within EA, or of choosing between GiveWell’s top charities. In these cases, would we expect to see a correlation between neglectedness and marginal impact? Probably not. The other funders are fully aware of diminishing returns, and they will take that into account when making decisions. If one cause had a higher marginal impact than another, we’d expect funders to move money away from the latter and toward the former. Thus, in equilibrium, we’d expect all causes that receive positive funding to have the same marginal impact, and all other causes to receive zero funding and have marginal impacts lower than the equilibrium level. This means that the neglectedness heuristic is useless in such a world. Among causes that receive a positive amount of funding, there is no correlation between neglectedness and marginal impact. If we include causes that receive zero funding, the correlation actually becomes negative (the zero-funding causes are the most neglected precisely because they have a low marginal impact).
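
To illustrate, here is a toy simulation (entirely my own construction; the effectiveness numbers are made up) in which like-minded, rational funders allocate a common budget in small steps to whichever cause currently has the highest marginal impact, under diminishing returns everywhere:

```python
import numpy as np

# Hypothetical effectiveness coefficients for 10 causes (made-up numbers).
# Impact of cause i is a_i * log(1 + r_i), so marginal impact a_i / (1 + r_i)
# is diminishing everywhere.
a = np.array([2.0, 1.5, 1.2, 1.0, 0.8, 0.6, 0.4, 0.1, 0.05, 0.02])
r = np.zeros_like(a)

def marginal_impact(a, r):
    return a / (1.0 + r)

# Rational funders who share our values: each small unit of the common
# budget goes to whichever cause currently has the highest marginal impact.
budget, step = 30.0, 0.01
for _ in range(int(budget / step)):
    r[np.argmax(marginal_impact(a, r))] += step

mi = marginal_impact(a, r)
funded = r > 0
print(mi[funded].max() - mi[funded].min())  # tiny: marginal impacts equalized
print(r[~funded], mi[~funded])              # the least effective causes get nothing
```

Among funded causes the marginal impacts converge to a common level, so neglectedness carries no information there; the causes left with zero funding are the most neglected precisely because their marginal impact is lowest.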

Of course, the situation above was an extreme case. In very few cases can we expect other funders to be perfectly rational and to perfectly share our values. But in many cases we can expect them to be at least partially rational and to partially share our values. In these situations, causes with more funding would tend to be more important or tractable, which would weaken the link between neglectedness and marginal impact. Specifically, the expected marginal impact of a cause would increase with neglectedness at a rate lower than the rate of diminishing returns. The more that other funders share our values, the weaker the relationship will become. In the extreme case, already covered, there is no correlation.

Now, one may try to defend the neglectedness heuristic by appealing to the fact that it is meant to be used in conjunction with other factors. Rather than looking at the general link between existing funding and marginal impact between any two causes, we should look at the link between any two causes that are of similar tractability and importance. Sure, there may be some selection going on where causes with more funding tend to be more important, but if we control for importance and other factors, we would expect the link between neglectedness and marginal impact to be just as strong as the rate of diminishing returns.

Such an argument works well in cases where you are sure that other funders are no better informed than you are. However, in many cases, there’s at least a possibility that others have some information that we don’t. So, if we have two causes that we expect to be equally tractable and important, but for some reason cause A is receiving more funding than cause B, can we be completely sure that they really are equally important and tractable? It could be that cause A is receiving more funding because of differences in other funders’ values relative to our own, or due to irrationalities in others’ behavior. But it also could be that the other funders know something about the importance or tractability of the causes that we don’t. This will tend to weaken the link between neglectedness and marginal impact, even after controlling for the perceived importance and tractability of the causes.

Selection Based on Alternative Factors

Even in cases where there is a positive relationship between neglectedness and marginal impact, other heuristics may be more useful than neglectedness. Ideally, we would like to not simply select causes that are neglected, but to select causes that are neglected for reasons other than their impact. This is possible in cases where we can predict some of the factors that cause variation in funding. For example, we may be able to predict which causes are likely to be underweighted by others’ value systems relative to ours (this is the idea behind Paul Christiano’s article). This predicted “value conflict” measure will be more positively correlated with marginal impact than neglectedness because it won’t correlate with the “additional information” factor. Furthermore, after controlling for the value conflict measure, the link between neglectedness and marginal impact will be weakened even more. The value conflict heuristic thus can serve as an upgrade to the neglectedness heuristic.

The Possibility of Increasing Returns

The analysis so far has depended on the assumption that all causes are subject to diminishing returns, which seems to be the default assumption in most EA cause-prioritization work that I have seen. Now I’d like to present a brief argument that calls into question the reliability of that assumption.

In intro economics courses, I was taught that although increasing returns to scale are common, in any competitive market equilibrium we can expect to see firms operating at a point of diminishing or constant returns. Why should that be the case? Because if one firm faces increasing returns, it can simply expand (which lowers its marginal costs), charge a lower price than its competitors, and capture more of the market. Thus, if multiple firms exist in equilibrium, we can expect that each has expanded to a point where it is subject to diminishing (or constant) returns to scale. This gives a clear theoretical reason for assuming diminishing returns to scale in competitive markets. (Still, natural monopolies exist, where the point of diminishing returns never comes and the only possible equilibrium has one firm facing increasing returns.)

Does the same mechanism exist to motivate the assumption that charities (or cause areas, research areas, etc.) will always be operating at an output level where they face diminishing returns? In some cases, maybe. For example, if the Against Malaria Foundation (AMF) were facing increasing returns, it could borrow money to expand, advertise that it can now do good at a much lower cost, attract donations away from competing charities, and use those donations to pay off the expansion. This scenario seems plausible because many AMF donors are rational and strategic in their donation decisions, and so they will predictably increase donations if the charity becomes more cost-effective. In other cases, however, it seems less likely that donors will be so strategic. Let’s say that a certain type of climate change mitigation research is facing increasing returns to scale. Are we confident that research institutes in this area can convince enough donors to finance an expansion? It’s very difficult to find out in the first place whether increasing returns exist, and it would be even more difficult to convince donors, many of whom do not make philanthropic decisions in the most rational way, to increase their donations in response.

I’d also note that constant returns to scale is a common assumption to make within economics, particularly at the industry level, which may be comparable to the cause-level for altruistic opportunities.

These considerations make me skeptical of assuming diminishing returns for altruistic opportunities. I’d like to hear if there’s been any relevant work done on this topic (either within EA organizations or within general academia). Increasing returns is a fairly common topic within economics, so I figure there is plenty of relevant research out there on this.

Relation to 80,000 Hours Cause Prioritization Framework

The commentary above leads us to question the usefulness of the neglectedness heuristic. Now I’d like to examine how this consideration would impact cause-prioritization when viewed through 80,000 Hours’ problem framework, presented here.

The 80,000 Hours approach breaks marginal impact into three ratios corresponding to scale, solvability, and neglectedness:

Good done per extra dollar = (Good done / % of problem solved) × (% of problem solved / % increase in resources) × (% increase in resources / extra dollar)

This decomposition will always work as long as you can correctly estimate the values for scale, solvability, and neglectedness (whether it is the most useful way to break down the problem is another story, but it will certainly work if the numbers can be estimated correctly). Now, looking a bit closer at the neglectedness score, it can be rewritten as:

Neglectedness = % increase in resources per extra dollar = 1 / (resources currently going toward the problem)

So our value for neglectedness is determined completely by how many resources are going toward the cause or problem. This means that it is not affected by any judgements about the link between neglectedness and marginal impact. Instead, such judgements have to show up in our estimates of solvability. If there is a sufficiently strong correlation between neglectedness and marginal impact, then we would expect zero correlation between neglectedness and solvability. (Why zero? Because solvability is defined in terms of a percentage increase in resources. If all problems yield the same amount of good when funding is doubled, then problems with less funding have a larger marginal impact simply because it is easier to double their funding.) If there is no correlation between neglectedness and marginal impact, then we would expect a negative correlation between neglectedness and solvability.
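
As a quick numeric sketch of this point (all figures hypothetical), holding the scale and solvability scores fixed while varying current funding:

```python
# Hypothetical numbers: two problems with equal scale and solvability scores
# but different amounts of current funding R (in dollars).
def good_per_dollar(scale, solvability, resources):
    # scale: good done per % of the problem solved
    # solvability: % of the problem solved per 1% increase in resources
    # neglectedness: % increase in resources per extra dollar = 1 / resources
    neglectedness = 1.0 / resources
    return scale * solvability * neglectedness

A = good_per_dollar(scale=100.0, solvability=1.0, resources=1_000_000)
B = good_per_dollar(scale=100.0, solvability=1.0, resources=100_000)
print(B / A)  # 10.0: with equal solvability scores, the 10x more neglected
              # problem has 10x the marginal impact
```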

If we look at the scores that 80,000 Hours gives to various problems, we see that the solvability scores are nearly constant. This means there is nearly no correlation between neglectedness and solvability, which implies a large positive correlation between neglectedness and marginal impact (in the article, "total score" = marginal impact). While I’m not sure how the solvability scores were determined, I would guess that the idea that there’s a strong link between neglectedness and marginal impact due to diminishing returns played a role, as it is mentioned multiple times in the general cause prioritization article.

If my argument presented above is correct, and there’s reason to doubt the link between neglectedness and marginal impact, then we may want to use a prior that neglectedness and solvability are negatively correlated. Furthermore, we may want to use other heuristics, such as the value conflict measure mentioned earlier, to better estimate solvability. Finally, if we allow for increasing returns to scale, we would want to adjust our priors on the correlation between neglectedness and solvability to be even more negative (not to mention the fact that prioritization based on marginal impact may not be the best option when one faces increasing returns).

Model

To illustrate the ideas in this post, I’ll use a simple model of cause selection. While the functional forms are chosen to make things nice and linear, I expect that the conclusions would still hold if more complex functional forms were used. There may be errors in the reasoning or algebra below; if you notice anything please let me know.

In this situation you are attempting to choose where to invest a small amount of resources. There are a number of causes open to you, indexed by i, each of which has some fixed amount of resources r_i invested in it by other actors regardless of what decision you make. Each cause has a positive impact on the world (relative to your value system), which is a function of r_i:

U_i(r_i) = (a_i + u_i)(r_i - r_i^2/2)

where a_i is the perceived importance and tractability of the cause, and u_i is the importance and tractability of the cause that is unknown to you.

Because you only have a small amount of resources available, you are interested in the marginal impact of giving to each cause, which is the derivative of the previous equation with respect to r_i:

MI_i = (a_i + u_i)(1 - r_i) = (a_i + u_i)·n_i

where neglectedness n_i = 1 - r_i (so r_i should be thought of as funding measured as a fraction of the level at which returns are exhausted). We're interested in predicting the marginal impact of a cause area based on the known values a_i and n_i, so we'll look at the expected value of marginal impact conditional on a_i and n_i:

E[MI_i | a_i, n_i] = (a_i + E[u_i | a_i, n_i])·n_i

First, let's imagine that the amount of funding for each cause is independent of u_i (this could happen if others randomly choose what causes to fund, but it could also happen if they choose rationally but have values that are uncorrelated with ours). In this case, the conditional expectation becomes:

E[MI_i | a_i, n_i] = a_i·n_i

where I assume that E[u_i | a_i] = 0 (if we are doing a good job of estimating importance and tractability, then this should be true). As a measure of the usefulness of the neglectedness heuristic, we can look at the slope of the conditional expectation curve:

dE[MI_i | a_i, n_i]/dn_i = a_i

This result shows the motivation behind the neglectedness heuristic. When others' decisions about what to fund are uncorrelated with the impact of the opportunity, we would expect a positive relationship between neglectedness and marginal impact, with slope equal to the cause area's perceived importance and tractability.
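
This case is easy to check numerically. The following sketch (my own; the distributions and parameter values are arbitrary) draws neglectedness independently of the unknown component u_i and fits the best linear predictor of marginal impact given neglectedness:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
a = 1.0                    # perceived importance/tractability (illustrative value)
u = rng.normal(0, 0.3, N)  # importance/tractability unknown to us, mean zero
n = rng.uniform(0, 1, N)   # neglectedness, assigned independently of u

mi = (a + u) * n           # marginal impact, as in the model

# slope of the best linear predictor of marginal impact given neglectedness
slope = np.polyfit(n, mi, 1)[0]
print(slope)               # close to a = 1.0
```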

However, in many cases we cannot expect neglectedness to be completely independent of u_i, and instead would expect a negative correlation (Cov(n_i, u_i) < 0). Here the slope of the conditional expectation curve becomes:

dE[MI_i | a_i, n_i]/dn_i = a_i + E[u_i | a_i, n_i] + n_i·(dE[u_i | a_i, n_i]/dn_i)

which will usually be lower than a_i because the third term is negative. Using the previous assumption that E[u_i | a_i] = 0, we find that the average slope of the conditional expectation curve is less than a_i:

E[dE[MI_i | a_i, n_i]/dn_i] = a_i + E[n_i·(dE[u_i | a_i, n_i]/dn_i)] < a_i

This means that the usefulness of the neglectedness heuristic is weakened when neglectedness is correlated with unobserved aspects of importance or tractability, which is to be expected when others have at least some information that we don't and share some of our values. In the extreme case, where others are rational, equally informed, and share our values, the slope of the conditional expectation will be zero.

Next I’d like to include the decision-making of others in the model. Specifically, we’ll assume that there is some observable aspect of causes which influences how much people invest in them but has nothing to do with their marginal impact. As an example, I’ll use v_i, the difference in value that others place on cause i relative to your values. To model this, in addition to our marginal impact equation, we now need an equation for how the amount of resources going toward a cause is determined:

n_i = g(v_i) + e_i

where g(v_i) is the conditional expectation function of neglectedness given value conflict, and e_i captures other determinants of outside funding that are uncorrelated with v_i. Plugging this into the marginal impact equation and taking the derivative of the conditional expectation with respect to v_i gives:

dE[MI_i | a_i, v_i]/dv_i = a_i·g'(v_i)

where E[u_i | a_i, v_i] = 0 by the assumption that v_i is uncorrelated with marginal impact, and E[e_i | v_i] = 0 by the definition of e_i as factors that are uncorrelated with v_i.

The result in the previous equation has a nice interpretation: g'(v_i) is the rate at which a value conflict causes an increase (or decrease) in neglectedness, while a_i is the rate at which that neglectedness translates into marginal impact. Furthermore, we could potentially estimate the function g from the data (we can estimate the level of neglectedness of each cause and how much value others place on it relative to us). Using this function, we can calculate a predicted level of neglectedness n̂_i = g(v_i) for each cause. We then find:

dE[MI_i | a_i, n̂_i]/dn̂_i = a_i

which is the same slope that we got in the case where funding was randomly allocated across causes. This is what I meant when I said that selecting on other determinants of neglectedness (like value conflict) can be more useful than simply selecting on neglectedness itself.
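
The following simulation (my own sketch; the linear form of g and all parameter values are illustrative) checks both results: regressing marginal impact on raw neglectedness gives an attenuated slope, while regressing on the value-conflict-predicted component of neglectedness recovers the slope a_i:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400_000
a = 1.0                     # known importance/tractability (illustrative)
u = rng.normal(0, 1, N)     # others' private information about impact
v = rng.normal(0, 1, N)     # value conflict: unrelated to impact
e = rng.normal(0, 0.5, N)   # other determinants of outside funding

# neglectedness rises with value conflict and falls with privately-known
# impact; here g(v) = 2 + v
n = 2 + v - 0.5 * u + e

mi = (a + u) * n            # the model's marginal impact equation

# regressing on raw neglectedness: slope attenuated below a
slope_raw = np.polyfit(n, mi, 1)[0]

# regressing on value-conflict-predicted neglectedness g(v): slope
# recovers a, as in the random-funding case
slope_pred = np.polyfit(2 + v, mi, 1)[0]
print(slope_raw, slope_pred)  # roughly 0.33 and 1.0
```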

Comments

" I’d like to hear if there’s been any relevant work done on this topic (either within EA organizations or within general academia). Increasing returns is a fairly common topic within economics, so I figure there is plenty of relevant research out there on this. "

These are my key reasons (with links to academic EA and other discussions) for seeing diminishing returns as the relevant situation on average for EA as a whole, and in particular the most effective causes:

  • If problems can be solved, and vary in difficulty over multiple orders of magnitude (in required inputs), you will tend to see diminishing returns as you plot the number of problems solved with increasing resources; see this series of posts by Owen Cotton-Barratt and others
  • Empirically, we do see systematic diminishing returns to R&D inputs across fields of scientific and technological innovation, and for global total factor productivity; but historically the greatest successes of philanthropy, reductions in poverty, and increased prosperity have stemmed from innovation, and many EA priorities involve research and development
  • In politics and public policy the literatures on lobbying and campaign finance suggest diminishing returns
  • In growing new movements, there is an element of compounding returns, as new participants carry forward work (including further growth), and so influencing; this topic has been the subject of a fair amount of EA attention
  • When there are varied possible expenditures with widely varying cost-effectiveness and some limits on room for more funding (eventually, there may be increasing returns before that), then working one's way from the most effective options to the least produces a diminishing returns curve at a scale large enough to encompass multiple interventions; Toby Ord discusses the landscape of global health interventions having this property
  • Elaborating on the idea of limits to funding and scaling: an extremely cost-effective intervention with linear or increasing returns that scaled to very large expenditures would often imply impossibly large effects; there can be cheap opportunities to save a human life today for $100 under special circumstances, but there can't be trillions of dollars worth of such opportunities, since you would be saving more than the world population; likewise the probability of premature extinction cannot fall below 0, etc
  • So far EA is still small and unusual relative to the world, and much of its activity is harvesting low-hanging fruit from areas with diminishing returns (a consequence of those fruit) that couldn't be scaled to extremes (this is least true for linear aid interventions added to the already large global aid and local programs, and in particular GiveDirectly, but holds for what I would consider more promising, in cost-effectiveness, EA global health interventions such as gene drive R&D for malaria eradication); as EA activity expands more currently underfunded areas will see returns diminish to the point of falling behind interventions with more linear or increasing returns but worse current cost-effectiveness
  • Experience with successes using neglectedness (which in prioritization practice does involve looking at the reasons for neglect) thus far, at least on dimensions for which feedback has yet arrived

" Ideally, we would like to not simply select causes that are neglected, but to select causes that are neglected for reasons other than their impact. "

Agreed.

Thanks for this comment! The links were helpful. I have a few comments on your points:

" Empirically, we do see systematic diminishing returns to R&D inputs across fields of scientific and technological innovation "

After reading the introduction of the article you linked, I'm not sure that it has found evidence of diminishing returns to research, or at least the kind of diminishing returns that we would care about. They find that the number of researchers required to double GDP (or any other measure of output) has increased over time, but that doesn't mean that the number of researchers required to increase GDP by a fixed amount has increased. In fact, if you take their Moore's law example, we find that the number of transistors added to a computer chip per researcher per year is about 58,000 times larger than it was in the early 70s (it takes 18 times more researchers to double the number of transistors, but that number of transistors is about a million times larger than it was in the 70s). When it comes to research on how to do the most good, I think we care about research output in levels, rather than in percentage terms (so, I only care how many lives a health intervention would save at time t, rather than how many lives it will save as a percentage of the total number of lives at time t).
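
The rough arithmetic behind that claim, using the approximate figures quoted above (not exact values from the underlying paper):

```python
# Rough arithmetic with the approximate figures quoted above, not exact
# values from the paper being discussed.
researchers_ratio = 18        # ~18x more researchers now needed per doubling
transistor_level_ratio = 1e6  # transistor counts ~1,000,000x their early-70s level

# Productivity in percentage terms (doublings per researcher) fell ~18x,
# but productivity in levels (transistors added per researcher) rose:
level_productivity_ratio = transistor_level_ratio / researchers_ratio
print(level_productivity_ratio)  # ~56,000, the same order as the 58,000 figure
```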

" In politics and public policy the literatures on lobbying and campaign finance suggest diminishing returns "

I'm struggling to see how those articles you linked are finding diminishing returns. Is there something I'm missing? The lobbying article says that the effectiveness of lobbying is larger when an issue does not receive much public attention, but that doesn't mean that, for the same issue, the effectiveness of lobbying spending will drop with each dollar spent. Similarly, the campaign finance article mentions studies that find no causal connection between ad-spending and winning an election for general elections and others which show a causal connection for primary and local elections. I don't see how this means that my second dollar donated to a campaign will have less expected value than my first dollar.

As antonin_broi mentioned in another comment, political causes seem to have increasing returns built in to them. You need a majority to get a law passed or to get someone elected, so under complete certainty there would be zero (marginal) value to convincing people to vote your way until you reach the median voter. After that there will once again be zero marginal value to buying additional votes.
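
A minimal sketch of this threshold effect (a made-up 101-voter election under complete certainty):

```python
# Toy illustration (my own numbers): value of convincing voters under complete
# certainty, in a 101-voter election where your side starts with 40 votes.
def outcome(votes_bought, base=40, needed=51):
    # 1.0 if the law passes / the candidate wins, 0.0 otherwise
    return 1.0 if base + votes_bought >= needed else 0.0

# marginal value of each successive vote bought
marginal = [outcome(k + 1) - outcome(k) for k in range(20)]
print(marginal)  # all zeros except the single vote that crosses the majority line
```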

" In growing new movements, there is an element of compounding returns, as new participants carry forward work (including further growth), and so influencing; this topic has been the subject of a fair amount of EA attention "

I agree that this is important for growing new movements, and I have seen EA articles discuss a sort of "multiplier effect" (if you convince one person to join a group they will then convince other people). But none of the articles I have seen, including the one that you linked, have mentioned the possibility of increasing returns to scale. Increasing returns would arise if the cost of convincing an additional person to join decreases with the number of people that are already involved. This could arise because of changing social norms or due to increased name recognition.

" historically the greatest successes of philanthropy, reductions in poverty, and increased prosperity have stemmed from innovation, and many EA priorities involve research and development "

This brings up one potentially important point: in addition to the scaling effects that you mentioned, another common source of increasing returns is high research and development requirements. High R&D requirements mean that the first units of output are very expensive (because in addition to the costs of production you also have to learn how to produce them) compared with following units. To apply this to an EA topic, if GiveWell didn't exist, then to do a unit of good in global health we would either have to fund less cost-effective charities (because we wouldn't know which one was best) or pay money to create GiveWell before donating to its highest recommended charities. In the second scenario, the cost of producing a unit of good within global health is very high for the first unit and significantly lower for the second. The fact that innovation seems to be one of the more effective forms of philanthropy increases the possibility that we are in a world where increasing returns to scale are relevant to doing good. However, I'm not completely sure of my reasoning here. I may be missing something.

" Experience with successes using neglectedness (which in prioritization practice does involve looking at the reasons for neglect) thus far, at least on dimensions for which feedback has yet arrived "

I think this would be a very important piece of evidence. Can you give me some detail about the successes so far?

Thanks for this very interesting post!

I've been thinking a bit about examples of causes and interventions with increasing returns (I'm actually working on a philosophy paper that touches on this issue), and it seems to me that many examples could be found in causes and interventions that involve social norms and politics.

For example, suppose you are putting resources into a campaign to encourage members of Parliament to vote in favor of a certain law, which would have a great impact if passed. There may be increasing returns to campaigning at the point where the campaign succeeds in convincing the majority of members of Parliament to vote for the law. This is because there is a threshold: if you do not spend enough resources in the campaign to convince half of the members of Parliament, the law is not passed and your impact is very low; but as soon as you reach the threshold of resources necessary to convince the majority, the law gets passed and you have a very high impact.

The same happens with social norms. Some social norms correspond to equilibria which are hard to modify, so a critical mass of efforts could be necessary to shake them, and then it becomes easy to shift them towards other equilibria. For example, if you want to spread the moral norm of antispeciesism, there might be a critical mass of antispeciesists necessary to make antispeciesism mainstream, and speciesism blameworthy in society. After the critical mass is reached it might become much easier to make progress.

Thanks for the response. I agree that social norms and politics are areas where increasing returns seem likely.

Even before I looked over your model, I agreed with you that we should be wary about assuming higher marginal impact from donations to more "neglected" causes. There's a lot of noise in the EA funding landscape, partly because organizations' values differ and mostly because noise is inevitable in a context with few funders giving to a large number of organizations, many of which are quite new and/or have little funding from non-EA orgs.

That said, I like the concept of neglectedness for a few reasons you didn't mention, at least not in so many words:

  • When an organization doesn't have much funding, extra funding may not have unusually high marginal impact, but it will often provide unusually high marginal information:
    • If I donate to GiveWell so that they can run an up-to-date literature review on a topic they've studied for years, I'll learn a little bit more about that topic.
    • If I give the same funding to a researcher studying some highly unusual topic (e.g. alternative food sources as a strategy to counter global famine), I might learn a lot about something I knew nothing about.
    • Of course, it may be that a well-funded research team also produces better research than a team with no track record, but if we focus on just funding research or work into a cause we don't know much about, it seems likely that we'll proportionally increase our knowledge of that cause more than we would of a better-studied cause. (We can then adjust our beliefs about impact and tractability, so that we no longer need to rely on neglectedness as a tiebreaker.)
  • Given the number of causes/projects in EA, and the limited number of funders, many ideas haven't been very well-studied. So if I'm worried about an opportunity being weak because it's not well-funded, even if I've checked carefully and it seems high-impact to me, I should consider that I might be one of the world's best-informed people on that opportunity.
    • In financial markets, something being low-value signals that many experts have examined that thing and estimated its true value to be low. That's why it's hard to make money buying cheap stocks. But in charitable markets, especially the tiny "market" of EA, something being ill-funded could be a sign that almost no one has examined it.
    • This isn't always the case, of course, and high funding could also be taken as a signal of quality, but it does seem good to remember that "neglectedness" sometimes means "newness" or "no-one-has-looked-at-this-yet-ness".

I don't disagree with GiveWell that good opportunities are very likely to find good funders, but I've seen enough counterexamples over the last few years that I'm aware of how many gaps remain in the nonprofit funding space.

I think there's reason to be cautious with the "highest marginal information comes from studying neglected interventions" line of reasoning, because of the danger of studies not replicating. If we only ever test new ideas, and then suggest funding the new ideas which appear from their first study to have the highest marginal impact, it's very easy to end up funding several false positives: interventions that looked strong in a single study but don't actually work particularly well.
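The selection effect behind this worry can be sketched in a toy simulation (entirely made-up numbers, assuming every intervention is in fact equally effective): if we fund whichever intervention scored best in a single round of noisy studies, the winner's estimated effect will overstate its true effect, because taking the maximum of many noisy estimates selects for noise as well as signal.

```python
import random

random.seed(0)

n_interventions = 50
true_effect = 1.0   # suppose every intervention has the same true impact
noise_sd = 0.5      # sampling error in a single small study

# One noisy study estimate per intervention
study_results = [random.gauss(true_effect, noise_sd)
                 for _ in range(n_interventions)]

# Fund the apparent winner of this single round of studies
best = max(study_results)
print(f"true effect of every intervention: {true_effect}")
print(f"estimated effect of the 'winner':  {best:.2f}")
# The winner's estimate exceeds its true effect: a false positive
# created purely by selecting on noisy first studies.
```

Replication studies shrink this gap, since averaging over repeated measurements washes out the noise that made the "winner" look exceptional.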

In fact, in some sense the opposite argument could be made; it is possible that the highest marginal information gain will come from research into a topic which is already receiving lots of funding. Mass deworming is the first example that springs to mind, mostly because there's such a lack of clarity at the moment, but the marginal impact of finding new evidence about an intervention with lots of money in it could still be very large.

I guess the rather sad thing is that the biggest impact comes from bad news: if an intervention is currently receiving lots of funding because the research picture looks positive, and a large study then fails to replicate earlier results, a promising intervention now looks less so. If funding moves towards more promising causes as a result, that is a big positive impact, but it feels like a loss. It certainly feels less like good news than a promising initial study on a new cause area, but I'm not sure it actually results in a smaller impact.

I agree that non-replication is a danger. But I don't think that positive results are specifically high-value; instead, I care about studies that tell me whether or not an intervention is worth funding.

I'd expect most studies of neglected interventions to turn up "negative" results, in the sense that those interventions will seem less valuable than the best ones we already know about. But it still seems good to conduct these studies, for a couple of reasons:

  • If an intervention does look high-value, there's a chance we've stumbled onto a way to substantially improve our impact (pending replication studies).
  • Studying an intervention that is different from those currently recommended may help us gain new kinds of knowledge we didn't have from studying other interventions.
    • For example, if the community is already well-versed in public health research, we might learn less from research on deworming than from research on Chinese tobacco taxation policy (which could teach us about generally useful topics like Chinese tax policy, the Chinese legislative process, and how Chinese corporate lobbying works).

Of course, doing high-quality China research might be more difficult and expensive, which cuts against it (and other research into "neglected" areas), but I do like the idea of EA having access to strong data on many different important areas.

That said, you do make good points, and I continue to think that neglectedness is less important as a consideration than scale or tractability.

Related: I really like GiveWell's habit of telling us not only which charities they like, but which ones they looked at and deprioritized. This is helpful for understanding the extent to which the "deliberate neglect by rational funders" model pans out for a given intervention/charity, and I wish it were done by more grantmaking organizations.

Thanks for the comment. I agree that considering the marginal value of information is important. This may be another source of diminishing marginal total value (where total value = direct impact + value of information). It seems, though, that this is also subject to the same criticism I outline in the post. If other funders also know that neglected causes give more valuable information at the margin, then the link between neglectedness and marginal value will be weakened. The important step, then, is to determine whether other funders are considering the value of information when making decisions. This may vary by context.

Also, could you give me some more justification for why we would expect the value of information to be higher for neglected causes? That doesn't seem obvious to me. I realize that you might learn more by trying new things, but it seems that what you learn would be more valuable if there were a lot of other funders that could act on the new information (so the information would be more valuable in crowded cause areas like climate change).

On your second point, I agree that when you're deciding between causes and you're confident that other funders of these causes have no significant information that you don't, and you're confident that there are diminishing returns, then we would expect for neglectedness to be a good signal of marginal impact. Maybe this is a common situation to be in for EA-type causes, but I'm not so sure. A lot of the causes on 80,000 Hours' page are fairly mainstream (climate change, global development, nuclear security), so a lot of other smart people have thought about them. Alternatively, in cases where we can be confident that other funders are poorly informed or irrational, there's the worry about increasing returns to scale.

I think the argument is that additional information showing that a cause has high marginal impact might divert funding toward it from causes with less marginal impact. And getting this kind of information does seem more likely for causes without a track record that would allow a somewhat robust estimate of their (marginal) impact.

This is essentially what I was thinking. If we're to discover that the "best" intervention is something that we aren't funding much now, we'll need to look more closely at interventions which are currently neglected.

I agree with the author that neglectedness isn't a perfect measure, since others may already have examined neglected causes and been unimpressed, but I don't know how often that "previous examination" actually happens (probably not too often, given the low number of organizations within EA that conduct in-depth research on causes). I'd still think that many neglected causes have received very little serious attention, especially attention informed by the most up-to-date research (maybe GiveWell said no five years ago, but five years is a lot of time for new evidence to emerge).

(As I mentioned in another comment, I wish we knew more about which interventions EA orgs had considered but decided not to fund; that knowledge is the easiest way I can think of to figure out whether or not an idea really is "neglected".)

Thank you for this valuable contribution to the community. I have struggled with the concept of neglectedness - and its predictive value for marginal returns - since it was first introduced to me. I don't buy it, and your post gives a good first counterargument.

One aspect that seems to be overlooked is the effect of R&D spending on marginal returns. If more funds have been spent on R&D, funds spent on implementation can be allocated more efficiently. So for many cause areas you may actually see increasing marginal returns (on dollars spent on implementation) as the area matures.

For example, the number of dollars required to save one QALY within the cause area "HIV" by providing medication is substantially lower today than it was 10 years ago, and it will continue to decrease. In other words, the function "additional QALYs saved from HIV by investing one dollar in medication at time t" could be decreasing in the amount of funds invested in implementation up to time t (diminishing returns), but increasing in the amount of funds invested in R&D up to time t. This function is not necessarily decreasing (or increasing) in the total amount of funds invested in HIV (implementation and R&D combined). To make things more complex, consider the impact of the last dollar spent on HIV medication that stops the last HIV-infected person from spreading the disease and thereby effectively eradicates HIV from the earth...
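This shape can be sketched with a made-up functional form (the function and all constants below are illustrative assumptions, not estimates of anything about HIV): marginal QALYs per dollar fall with cumulative implementation spending but rise with cumulative R&D spending, so total spending alone does not pin down marginal returns.

```python
import math

def marginal_qalys_per_dollar(impl_spend, rd_spend,
                              base=1.0, k_impl=1e-7, k_rd=1e-8):
    """Illustrative (made-up) functional form: decreasing in cumulative
    implementation spending, increasing in cumulative R&D spending."""
    diminishing = math.exp(-k_impl * impl_spend)     # crowding of implementation
    efficiency = 1.0 + math.log1p(k_rd * rd_spend)   # R&D lowers cost per QALY
    return base * diminishing * efficiency

# Two funding histories with the same $100M total spend:
mostly_impl = marginal_qalys_per_dollar(9e7, 1e7)  # $90M implementation, $10M R&D
mostly_rd = marginal_qalys_per_dollar(1e7, 9e7)    # $10M implementation, $90M R&D
print(mostly_impl, mostly_rd)
# Equal totals, very different marginal returns: neglectedness measured
# by total spending alone cannot distinguish the two histories.
```

Under these assumptions the R&D-heavy history ends up with far higher marginal returns per implementation dollar, even though both cause areas look equally "crowded" by total spend.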

[This is just an example. I do not advocate investing in the cause area HIV.]

Long story short: it all depends on the cause area and its current state. I would strongly advocate dropping the heuristic of neglectedness. It does more harm than good, and leads to a scattering of funds instead of globally coordinated action on the earth's most urgent issues.

Martijn

Thanks for the comment. I agree that R&D costs are very important and can lead to increasing marginal returns. The HIV example is a good one, I think.

I think that neglectedness is useful for initial scoping. But I think then it makes sense to move to explicit cost-effectiveness modeling like this to address your concerns.

I agree that moving to explicit cost-effectiveness modeling is ideal in many situations. However, the arguments that I gave in the post also apply to the use of neglectedness for initial scoping. If neglectedness is a poor predictor of marginal impact, then it will not be useful for initial scoping.


For clarification: is (PITi + ui) the "real" tractability and importance?

The text seems to make more sense that way, but on reading "ui is the unknown (to you) importance and tractability of the cause", I at first interpreted ui as being the "real" tractability and importance, rather than just a noise term.

Yes, PITi + ui is supposed to be the real importance and tractability. If we knew PITi + ui, then we would know a cause area's marginal impact exactly. But instead we only know PITi.
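A toy simulation (my own construction, with made-up distributions) illustrates the post's point in this notation: each cause's true marginal impact is PITi + ui, you observe only PITi, and if other, better-informed funders act on ui then crowdedness tracks PITi + ui. Among causes that look identical to you, the more neglected ones then tend to be worse, not better.

```python
import random

random.seed(1)

causes = []
for _ in range(5000):
    pit = random.gauss(0, 1)  # the part of impact you observe (PITi)
    u = random.gauss(0, 1)    # the part only other funders observe (ui)
    true_impact = pit + u
    # Informed funders direct money toward high true impact, so
    # crowdedness tracks PITi + ui (plus a little funding noise):
    crowdedness = true_impact + random.gauss(0, 0.1)
    causes.append((pit, true_impact, crowdedness))

# Compare causes that look the same to you (PITi near zero):
similar = [c for c in causes if abs(c[0]) < 0.1]
neglected = [c for c in similar if c[2] < 0]
crowded = [c for c in similar if c[2] >= 0]

def avg_true_impact(group):
    return sum(c[1] for c in group) / len(group)

print(f"avg true impact, neglected: {avg_true_impact(neglected):.2f}")
print(f"avg true impact, crowded:   {avg_true_impact(crowded):.2f}")
# When other funders act on ui, neglectedness predicts LOW marginal impact
# among causes that look identical on PITi alone.
```

The reversal depends entirely on the assumption that other funders observe ui; if funding were instead allocated on PITi alone (or at random), neglectedness would carry no such negative signal.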
