The Bittersweetness of Replaceability

by Lila · 11th Jul 2015 · 23 comments



Everybody's replaceable

When I first became interested in effective altruist ideas, I was inspired by the power of one person to make a difference: "the life you can save", as Peter Singer puts it. I planned to save lives by becoming an infectious disease researcher. So the first time I read about replaceability was a gut punch, when I realized that it would be futile for me to pursue a highly competitive biomedical research position, especially given that I was mediocre at wet lab research. In the best case, I would obtain a research position but would merely be replacing other applicants who were roughly as good as me. I became deeply depressed for a time, as I finished a degree that was no longer useful to me. After I graduated, I embarked on the frustrating, counterintuitive challenge of making a difference in a world in which everybody's replaceable.

Replaceability evokes ambivalence: on the one hand, it makes one's efforts to improve the world feel Sisyphean. On the other hand, replaceability is based on a fairly positive perception of the world, in which lots of people are pursuing worthwhile goals fairly well. Lately I've become more optimistic about the world, which makes me more inclined to believe in replaceability. This makes certain EA options appear less promising, but an understanding of incentives can lead to the identification of areas in which one can make an irreplaceable difference.

Donation saturation

Earning to give avoids some of the problems of replaceability, but a major challenge is finding an effective charity that needs donations. As GiveWell discusses, there are few promising charitable causes that are underfunded. GiveWell often mentions lack of room for more funding when it fails to recommend a charity. It's somewhat easier to find funding gaps when considering more esoteric causes, such as existential risks, but as the GiveWell post mentions, even these areas receive considerable funding. A number of organizations work on global catastrophic risks. I've sometimes donated to EA organizations such as the Centre for Effective Altruism, only to feel a twinge of regret when their funding goals are quickly met or even exceeded.  

"Do your job"

Because of the limitations of earning to give, I've considered pursuing a career that would give me control over large amounts of money, particularly scientific funding. But lately I've been having a mental conversation like this: 

me: If I take a job funding scientific research, I can do a lot of good.  I could potentially move tens of millions of dollars to effective research that helps people. 

me: But isn't "funding effective research that helps people" already the goal of NSF and other major funders? 

me: Well, I would do it better, because I'm awesome.

me: That's awfully cocky of you. What's something specific you'd change?

me: I would fund more pandemic prevention, for example. 

me: The U.S. government already spends $1 billion a year on biosecurity, as the GiveWell blog post mentioned. For comparison, the budget of NSF is less than $8 billion. The government is also stockpiling vaccines to prevent pandemics. There might be more work to be done, but you're running out of low-hanging fruit.  

Many EA career ideas reduce to "Do your job. Do the shit out of it." This is sound advice, but it's not as radical or inspiring as one might hope. 

Is everyone else an idiot?

If everyone else were completely incompetent, being good at one's job could allow one to make a large difference in the world. The narrative that "everyone but me is an idiot" is popular among open mic comedians, and there are hints of this feeling in much of the rationalist literature. Proponents of this viewpoint often point to problems in scientific research, particularly the lack of reproducibility. While I agree that there are issues to be addressed in the research process, there are some people who are beginning to address them, as the GiveWell blog post discussed. Additionally, a single study doesn't make or break a discipline. My experience in academia (I'm a bioinformatics PhD student) has been quite positive. Most published research in my field seems fairly good, and I think there are several factors that contribute:


  • Saturation: There are 1.47 million researchers in the U.S., and they occupy almost every niche imaginable. I recently began a project in which I was trying to develop an expectation-maximization algorithm to filter binding events from background noise in Hi-C data. It was as arcane as it sounds, but midway through, another group published a paper doing exactly what I was trying to do, with an approach far better than mine. Even if you believe that most people aren't very clever, the sheer numbers mean that nearly every good idea is already taken.
  • Feedback: While an individual researcher has incentive to make her own research look good, there's a pleasure to be had in destroying someone else's research. This can be witnessed at the weekly lab meetings I attend, in which adults are nearly brought to tears. Though the system is brutal, it results in thoughtful criticism that greatly improves the quality of research. 
  • System 2 thinking: Daniel Kahneman describes system 2 thinking as slow, analytical thinking. Many cognitive errors and biases result from the snap judgments of system 1 thinking. I'd expect scientists to use system 2 thinking, because they think about problems over long periods of time and have a lot of domain-specific knowledge. 
These factors imply that being good at one's job won't have a large marginal benefit in many fields.
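To make the "saturation" anecdote concrete: the Hi-C project above boiled down to a mixture-separation problem, and EM is the standard tool for it. Here is a generic toy sketch, not the actual project code; the data, parameter values, and function name are all invented. EM alternates between softly assigning each observation a probability of being "signal" rather than "background" (the E-step) and re-fitting the two components to those soft assignments (the M-step).

```python
import random
import math

def em_two_gaussians(data, iters=50):
    """Fit a two-component 1D Gaussian mixture by EM.

    E-step: compute each point's responsibility, i.e. the posterior
    probability that it came from the "signal" component rather than
    the "background" component.
    M-step: re-estimate means, variances, and the mixing weight from
    those responsibilities. Runs for a fixed number of iterations.
    """
    # Crude initialization: seed the two means at the 25th and 75th
    # percentiles, with a wide shared variance.
    s = sorted(data)
    mu0, mu1 = s[len(s) // 4], s[3 * len(s) // 4]
    var0 = var1 = max(1e-6, (s[-1] - s[0]) ** 2 / 12)
    pi1 = 0.5  # prior weight of the signal component

    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for _ in range(iters):
        # E-step: responsibility of the signal component for each point
        r = []
        for x in data:
            p0 = (1 - pi1) * pdf(x, mu0, var0)
            p1 = pi1 * pdf(x, mu1, var1)
            r.append(p1 / (p0 + p1))
        # M-step: responsibility-weighted parameter updates
        n1 = sum(r)
        n0 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu0 = sum((1 - ri) * x for ri, x in zip(r, data)) / n0
        var1 = max(1e-6, sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        var0 = max(1e-6, sum((1 - ri) * (x - mu0) ** 2 for ri, x in zip(r, data)) / n0)
        pi1 = n1 / len(data)
    return (mu0, var0), (mu1, var1), pi1

# Toy data: background noise centered at 0, "binding events" centered at 5
random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(5, 1) for _ in range(100)]
bg, sig, w = em_two_gaussians(data)
```

On this toy input the fitted background mean lands near 0, the signal mean near 5, and the signal weight near 0.25; the point of the anecdote, of course, is that versions of exactly this idea were already published.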

How to be irreplaceable

Finding irreplaceable EA work requires identifying areas of society with systematic implementation problems. Focusing on cognitive biases may not be the best strategy, given that certain areas of society, such as scientific research and the stock market, seem to have avoided the worst of these biases even without explicit training. Instead, the Austrian school of economics can offer insight here. Austrian economists define rationality in terms of the actions people take to pursue their goals; goals are not explicit but are revealed through behavior. For example, a commonly cited example of irrationality is the default effect in organ donation: when people are made organ donors by default, most remain donors, but when donation is not the default, most do not opt in. An Austrian economist would say that this behavior reflects the fact that most people don't want to research the pros and cons of the boxes they check on forms. Perhaps they value their time too much and have found that the default is usually decent enough. Even if they express different preferences in surveys, their "true" preferences are revealed by their behavior. Thus, most human behavior is rational. Even though this is a tautology, it's useful for understanding human behavior because it focuses on incentives and preferences rather than nebulous attempts to pin down the platonic ideal of rationality.

The following figure is often used by libertarian economists to explain problems with economic incentives. The incentives of the state are represented by the red box. Though governments do some good, they also commit some of the largest and most blatant misappropriations of resources, including policies such as the drug war that are downright harmful. The charitable donations of the average person would be in the top right corner. This characterization seems accurate given that charitable giving is relatively stingy (about 2% of GDP) and largely ineffective: the most popular charitable causes are education (much of this is wealthy universities catering to wealthy students) and religion, which together account for 45% of charitable giving.

[Figure: Milton Friedman's "four ways to spend money" matrix — whose money is spent (yours vs. someone else's) against whom it is spent on (yourself vs. someone else). The state, spending someone else's money on someone else, occupies the red box; the average person's charitable giving (your money, spent on someone else) sits in the top right corner.]

However, in the strictest sense, scientific research and high-impact charities like the Gates Foundation would fall into the red box as well, and I've been arguing that there's more efficiency in these areas than one might expect. Thus, I'd characterize the matrix as a spectrum rather than a binary. My lab is not exactly spending its own money, and we're not exactly spending it on ourselves. But we spend a lot of time thinking about how to spend the limited money we receive, and we have deep domain-specific knowledge of what we're spending it on. Because we associate the money and our research so much with ourselves, we're much closer to the top left corner than the U.S. president is when he creates a budget.

As I mentioned above, saturation, feedback, and system 2 thinking promote high-quality scientific research. How do charities and governments compare on these dimensions? The market for charities is fairly saturated, and there are several organizations that do high-quality research, engage in system 2 thinking, and provide feedback. On the other hand, charitable donations from ordinary people are not subject to system 2 thinking and appropriate feedback. An analogous situation exists in government policy: a saturated field of think tanks and wonks engages in high-quality analytical thinking and provides feedback on policy proposals. However, the world of governments is not saturated, and individual governments face little competition. The feedback that governments receive is often perverse: citizens want lower taxes and higher government spending. System 2 thinking is notoriously lacking in public policy. 

Conclusions

Replaceability is a problem in almost every aspect of EA (though some EAs may see it as a weight off their shoulders). I feel slightly more favorably toward earning to give than I did previously, but I'm concerned about the lack of good giving opportunities. Direct EA work seems to be a good option, but there should be much more advocacy than research. EA organizations are probably reinventing the wheel through their research. Nitpickers will point out that advocacy itself is saturated and replaceable - this is probably true to an extent. Instead of advocating for specific policies, it may be better to focus on creating systems with favorable values and incentives. Compared to academic discussion of fallacies, an understanding of incentives can provide more insight into why systems fail. EA could benefit from the perspectives of libertarian economists.