Evidence and reasoning

by Aaron Gertler · 6 min read · 27th May 2020



Sometimes, a good idea turns out not to be such a good idea after all — for example, the PlayPump, a children’s merry-go-round designed to pump water, which turned out to be far less effective than the hand pumps it replaced.

Sometimes, an idea seems strange or extreme to some people but morally necessary to others — for example, the abolition of slavery in various parts of the world and at many different points throughout history.

Sometimes, an idea seems almost impossible to evaluate — for example, whether insects feel pain, whether we should care if they do, and whether the topic is worth researching at all.

The world is complex, which makes it very difficult to figure out how to do good over the long term.

Evidence and reasoning

There are many different ways that we can learn about how the world works. Two broad categories stand out: 

Evidence: We can discover new things by observing and experimenting. If we wanted to know how well mass deworming works, we could select all the villages in a certain region, then randomly run deworming campaigns in some but not others. By looking at the rate of worm infection in both groups of villages, we’d learn something about the impact of mass deworming. (A toy simulation of this design appears below.)

Reasoning: We’ve never landed humans on another planet, but we might be able to do so in the future. There are many things about interplanetary colonization that we can't learn from direct evidence (we can't ask random people to live on Mars to see how it affects their health).

However, we can use our knowledge of science and technology to estimate relevant factors (e.g. the cost of long-distance space travel). In turn, these estimates can help us get a sense for how many people might exist in future centuries — ten billion on Earth? A trillion throughout the galaxy? — and how valuable it would be to focus on work that might improve their lives. 
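To make the deworming example above concrete, here is a toy simulation of such a randomized evaluation. Every number in it (village counts, infection rates, the size of the treatment effect) is an invented assumption for illustration, not real data:

```python
# A toy simulation of the randomized deworming evaluation described above.
# All numbers are invented for illustration; a real trial would need far
# more care (baseline surveys, statistical power, ethics review, etc.).
import random

random.seed(0)

N_VILLAGES = 100
PEOPLE_PER_VILLAGE = 200
BASE_INFECTION_RATE = 0.40   # hypothetical infection rate without deworming
TREATMENT_EFFECT = 0.15      # hypothetical absolute reduction from deworming

villages = list(range(N_VILLAGES))
random.shuffle(villages)
treated = set(villages[: N_VILLAGES // 2])  # randomly assign half to deworming


def observed_rate(village: int) -> float:
    """Simulate the measured worm-infection rate in one village."""
    rate = BASE_INFECTION_RATE - (TREATMENT_EFFECT if village in treated else 0.0)
    infected = sum(random.random() < rate for _ in range(PEOPLE_PER_VILLAGE))
    return infected / PEOPLE_PER_VILLAGE


def avg(xs: list[float]) -> float:
    return sum(xs) / len(xs)


treated_rates = [observed_rate(v) for v in villages if v in treated]
control_rates = [observed_rate(v) for v in villages if v not in treated]

print(f"Treated villages: {avg(treated_rates):.1%} infected")
print(f"Control villages: {avg(control_rates):.1%} infected")
print(f"Estimated effect: {avg(control_rates) - avg(treated_rates):.1%}")
```

Because villages are assigned to the two groups at random, the difference in average infection rates estimates the campaign’s effect without being biased by pre-existing differences between villages; that is the core logic of a randomized trial.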


To figure things out, we normally have to use a combination of evidence and reasoning. For example, when we’re trying to figure out whether cows are sentient, we can draw on evidence about the structure of their brains — but we also need to use reason to go beyond that evidence (e.g. by drawing analogies between human and cow brains, and by considering how neural differences might shape a being’s internal experience).

Answering questions when strong evidence is available

Sometimes, we can learn a lot about a question using evidence that is already available or straightforward to collect. In those cases, effective altruism generally relies on the same research methods you’d see in academia. 

Examples of EA research include:

  • David Roodman’s review of research on whether reducing incarceration increases crime
  • Saulius Šimčikas’ estimates of how many animals are currently being farmed or kept in captivity
  • Luisa Rodriguez’s analyses of the potential impacts of nuclear war

Naturally, EA also draws heavily on research done outside of the movement; we are far from the only people who care about reducing poverty, preventing disease, helping animals, or many other ways of improving the world.

Reasoning through questions in the absence of strong evidence

Some elements of the world are more difficult to evaluate through evidence. They might involve:

  • Questions about how to evaluate human well-being. For example, to what extent should we measure how well someone’s life is going by looking at external features of their life (like income), versus by asking them how they feel about their life?
  • The internal experience of beings we don’t understand well — especially animals, like fish or insects, that are very different from humans.
  • Uncertain predictions about the future. For example, how can we work to prevent harm from technology that could be dangerous if it existed, but hasn’t been developed yet?

One approach would be to ignore issues that lean heavily on factors like these, so that we can focus on work that we've proven is cost-effective at improving lives.

However, we need to account for scale: how big is the problem we’re trying to solve? If something could be massively important, it may be worth trying to understand it better. And if we were to avoid exploring such options, we might miss out on the best opportunities. 

Below, we’ve shared two examples of EA taking on difficult questions whose answers could be hugely impactful.

Example: Insect suffering

Kelsey Piper suggests that we should use the following approach when considering “strange” ideas:

I think the principle I want us to abide by is something like ‘if something is an argument for caring more about entities who are widely regarded as not worthy of such care, then even if it’s a pretty absurd one, I am supportive of some people doing research into it, and if they’re doing that research with the intent of increasing everyone’s wellbeing and flourishing as much as possible, then they’re part of our movement’.

An example of Kelsey’s suggestion in action: people who advocate for making pesticides more humane, so that insects suffer less before they die.

To some, this may seem absurd. Is insect suffering really a major problem? Are insects even close to being sentient?

And yet, this is a serious field of inquiry within EA, because even if it seems unlikely to most people that insects suffer in a way we should care about:

  1. There are a lot of insects, so their suffering could be very large in scale (see the sketch after this list).
  2. We are very uncertain about how best to define "sentience," which makes it hard to prove that insects don't suffer. (For example, see this report on invertebrate sentience.)
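To see why the scale argument carries weight even at low probabilities of sentience, here is a rough expected-value sketch. Every figure in it (the insect population, the probability of sentience, the per-insect moral weight) is a placeholder assumption, not an estimate anyone in EA endorses:

```python
# A rough expected-value sketch of the scale argument above.
# Every figure here is a placeholder assumption, not a real estimate.
INSECTS_ALIVE = 1e19   # order-of-magnitude guess at the global insect population
P_SENTIENT = 0.01      # assumed (low) probability that insects can suffer
MORAL_WEIGHT = 1e-4    # assumed moral weight of one insect relative to one human
HUMANS_ALIVE = 8e9

# Expected "human-equivalent" stake under these assumptions:
expected_stake = INSECTS_ALIVE * P_SENTIENT * MORAL_WEIGHT
print(f"Expected human-equivalents: {expected_stake:.0e}")            # ~1e13
print(f"Ratio to the human population: {expected_stake / HUMANS_ALIVE:,.0f}x")
```

The particular numbers are not the point; reasonable people will plug in very different ones. The point is that multiplying a huge population by even a small probability of sentience can yield a stake too large to dismiss without investigation.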

Example: Predicting AI development

The development of more powerful AI could have massive implications for human civilization, so it’s useful to be able to anticipate when certain AI capabilities will be developed. This is the goal of AI Impacts, which aims to “answer decision-relevant questions about the future of artificial intelligence.”

It’s hard to make precise estimates of future technological progress, but AI Impacts approaches the problem from many different angles: surveying experts about what they think will happen, charting improvements in computing speed, and even looking back at history to see how other technologies developed.

That last point may seem irrelevant to AI development. But if we discover factors that have led to rapid progress across dissimilar technologies, we might learn something useful by considering how those factors could apply to artificial intelligence. It may be worth trying a range of approaches — however indirect — if this helps us make even minor progress on such an important question.
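As one concrete illustration of the trend-charting approach mentioned above, here is a minimal sketch that fits an exponential trend to hypothetical hardware price-performance data and extrapolates it forward. The data points are invented for illustration; they are not AI Impacts’ data, and real forecasting work is far more careful:

```python
# A minimal sketch of trend extrapolation on computing hardware.
# The data points below are invented; they are not AI Impacts' data.
import math

# (year, hypothetical FLOPS per dollar)
observations = [(2000, 1e9), (2005, 1e10), (2010, 1e11), (2015, 1e12)]

# Fit log10(flops_per_dollar) = a * year + b by least squares.
n = len(observations)
xs = [year for year, _ in observations]
ys = [math.log10(f) for _, f in observations]
x_mean = sum(xs) / n
y_mean = sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
    (x - x_mean) ** 2 for x in xs
)
b = y_mean - a * x_mean


def projected_flops_per_dollar(year: int) -> float:
    """Extrapolate the fitted exponential trend to a future year."""
    return 10 ** (a * year + b)


print(f"Doubling time: {math.log10(2) / a:.1f} years")
print(f"Projected FLOPS/$ in 2030: {projected_flops_per_dollar(2030):.1e}")
```

Extrapolations like this are fragile (trends bend, and hardware is only one input to AI progress), which is one reason AI Impacts combines them with expert surveys and historical case studies.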

Perspectives on evidence and reasoning

Effective Altruism is a Question, not an Ideology (Helen Toner, 2014)

It’s really unusual for someone who supports a movement to actively want to change their mind. But that’s the position that every aspiring effective altruist is in.

Anyone who can help us answer the question we care most about is a valuable ally. We can and should tell anyone who disagrees with our object-level beliefs that we really, truly want to be persuaded to think otherwise.

The wrong donation can accomplish nothing (GiveWell, 2016)

Conventionally, most people expect that charities are probably accomplishing good unless there's proof that money is being misappropriated. We disagree. We think that charities can easily fail to have impact, even when they're doing exactly what they say they are.

[...]

It's not that surprising. We think that many of the problems charities aim to address are extremely difficult problems that foundations, governments, and experts have struggled with for decades. Many well-funded, well-executed, logical programs haven't had the desired results. Given the lack of a successful track record of solving such complex problems, any charity claiming to have "the answer" bears the burden of proof to demonstrate that their programs are working. Most charities can't provide this type of evidence. Collecting evidence is expensive, and we've found that even many excellent charities don't do this. 

Furthermore, many giving decisions are motivated by personal connections: a friend asks you to support a cause, or you know someone who suffered from a disease that the charity fights. As a result, charities raise money based on their ability to market themselves and fundraise, as opposed to their ability to change lives. Because charities aren't being held accountable based on impact, there are probably a lot of charities that continue to raise and spend money but don't make any difference at all.

Does that mean that a given charity's programs don't work? Not necessarily. But, it does mean that it's important to look beyond marketing claims and stories when deciding where to make a donation.

Crucial Considerations and Wise Philanthropy (Nick Bostrom, 2014)

Suppose you’re out in the forest and you have a map and a compass, and you’re trying to find some destination. You’re carrying some weight — maybe a lot of water, because you need to stay hydrated to reach your goal — and trying to fine-tune the exact direction you’re going. You’re trying to figure out how much water you can pour out to lighten your load without having too little to reach your destination.

All of these are normal considerations: You’re fine-tuning the way you’re going to make more rapid progress towards your goal. But then you look more closely at this compass that you have been using, and you realize that the magnet part has actually come loose. This means that the needle might now be pointing in a completely different direction that bears no relation to North.

With this discovery, you now completely lose confidence in all the earlier reasoning that was based on trying to get the more accurate reading of where the needle was pointing. This would be an example of a crucial consideration in the context of orienteering.

The idea is that there could be similar types of consideration, in more important contexts, that throw us off completely. So a crucial consideration is a consideration such that if it were taken into account, it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors, but a major change of direction or priority.

Cognitive Biases Potentially Affecting Judgment of Existential Risks (Eliezer Yudkowsky, 2008)

If someone proposes a physics disaster, then the committee convened to analyze the problem must obviously include physicists. But someone on that committee should also know how terribly dangerous it is to have an answer in your mind before you finish asking the question.

Someone on that committee should remember the reply of Enrico Fermi to Leo Szilard’s proposal that a fission chain reaction could be used to build nuclear weapons. The reply was “Nuts!” — Fermi considered the possibility so remote as to not be worth investigating. 

Someone should remember the history of errors in physics calculations, such as the Castle Bravo nuclear test that produced a 15-megaton explosion instead of the predicted 4 to 8 megatons, because of an unconsidered reaction in lithium-7. They correctly solved the wrong equation, failed to think of all the terms that needed to be included, and at least one person in the expanded fallout radius died.

Someone should remember Lord Kelvin’s careful proof, using multiple, independent quantitative calculations from well-established theories, that the Earth could not possibly have existed for as long as forty million years.

Someone should know that when an expert says the probability is “a million to one,” without using actuarial data or calculations from a precise, precisely confirmed model, the calibration is probably more like twenty to one.
