I've seen excerpts of this podcast being shared widely on Twitter, and people seem to have found it valuable, so I figured I'd have it exist on the Forum as well.

The general Twitter consensus amongst economists seemed to be that Deaton's point of view is too extreme or mistaken, but I can't claim to follow a balanced selection of Twitter economists, and I'm not trying to argue anything here; I'm just posting the conversation.

This post mostly shares excerpts from the second half of the podcast. See the link above for the full transcript.

The podcast begins with a discussion of U.S. mortality trends and Deaton's work on "deaths of despair". My excerpts focus on other topics, but if you want to learn more about mortality trends, the podcast seems like a reasonable starting point.

Now, onto the more EA-related part of the discussion:

Angus Deaton: So I have no problem with altruism. It's the effectiveness that I have [issue with]. And when I listened to Peter talking about how easy it is to do these things, and the only thing that people have been doing wrong before was they weren't doing randomized controlled trials, then phase two randomized controlled trials, then we can find out what works… and to me that's just nonsense. And I don't think randomized controlled trials are capable of doing that. I don't like the way GiveWell uses them. But what you said is more fundamental. I think giving aid in other countries from outside is almost always a mistake.

Julia Galef: And are you talking about aid to governments, or are you talking about also aid to individuals?

Angus Deaton: Both. [...] one of the analogies I'd like to give is: suppose that you're living somewhere, and someone moves in next door. And this person who moves in next door, who lives with his wife, is someone really detestable. He treats his wife like a slave, gives her just enough to eat, and makes her life totally miserable.


Then the question is: You would really like to help this woman, who's truly miserable. And you ask yourself, "Okay, I have a randomized controlled trial that shows when I give women money, they do better."

Well, do you think it would be a good idea to give this woman money? Well, of course not, because her husband would just say, "Thank you very much," and take it.

Now, you might be able to do better by giving him money, which is the opposite of what the effective altruists like to say. The husband is like the government here, because the government has control over its people. And so if you give any significantly large sums to poor people, it's just one more source of revenue for the government, because the government is in the business of extracting from and exploiting its own people. They're not trying to help them. If they were trying to help them, the woman wouldn't be miserable in the first place, and these people would not be so poor. So a lot of the problems in a lot of those countries come from a government that is dictatorial, extractive, and basically plundering its own people. And the danger of giving aid to either the people or to the country is that you make that worse, not better.

Julia Galef: And you're saying that that happens even in the case of, say, giving out antimalarial bed nets? How would that happen?

Angus Deaton: Well, I've always argued the health side of this is probably less subject to this critique, but the government is providing health services anyway. And you're only going to have a healthy society and a good health system in those countries if the government provides it, and if there's a consensus among the people that they want it provided. So the big problem with providing health services from outside is that it makes the indigenous health services much worse.

Julia Galef: And is that a conjecture that sounds plausible, or is that something we have evidence of?

Angus Deaton: I think there's plenty of evidence of it over the years, because there's always been this debate in the global health community between these external innovations where people parachute in and inject people, or maybe you give them bed nets. So the bed nets are a sort of intermediate case.


Julia Galef: Oh, so you're saying that these interventions are helpful, they just didn't solve the biggest problem?

Angus Deaton: That's right. Well, they solved a big problem. Life expectancy went up by leaps and bounds in poor countries after the Second World War, largely because of these external innovations. And so we credit those with [a lot].

But if you're trying to provide healthcare... And remember, providing healthcare is incredibly difficult. We're really bad at it in this country, let alone in countries that just don't have the resources we do. And so it's a very difficult problem, but I think interventions from the outside of providing clinics and manning clinics and so on are likely to have unmeasured side effects, and those side effects are never taken into account in the randomized controlled trials either.


You have to do a much more serious job of looking at what happens. And that's why in my book, The Great Escape, what I argue for is that there's a huge number of things that we could do to help those people. 

For instance, how about the arms trade? When I talked to Peter Singer, I said, "Why didn't you ever say anything about the arms trade?" He said, "Well, that's too hard." 

Well, maybe. But if Peter and all the other effective altruists were to go to Canberra or go to Washington or go to London or go to the cities where they have some standing to speak, and speak up against the arms trade, then I think we'd do a lot more good than digging wells in the Sahel.

Julia Galef: I see. So your view is that… it's not that effective altruist interventions don't do good. You just think that we could do more good if the people attracted to effective altruism would turn to political influence and activism.

Angus Deaton: Well, that's true, but it's more specific than that. I think Jagdish Bhagwati was the first to use the phrase. He said, "I believe in giving help for Africa, not help in Africa."

Julia Galef: What does help "for" Africa consist of?

Angus Deaton: Trade policy, for instance, making it easier for African countries to sell their goods here, not putting punitive patents on drugs. There's a whole lot of things. The secret is not to go in there with money which will screw up the equilibrium between the government and the governed. You're not going to get development unless there's a government that voluntarily raises money from its people and uses it to benefit them. Most aid from the outside will severely interfere with that. There are lots of countries in Africa where more than 100% of government revenue is coming from abroad. There's no accountability to their own citizens.

And effective altruists make that worse.

Julia Galef: Well, the thing that's still unclear to me is whether... I don't really disagree with your picture of aid in general, but it seems to me that the specific, targeted interventions that effective altruists tend to favor don't have the baggage attached, and the problems you're talking about, that apply to most aid over time.


Maybe we should talk now about your critique of randomized controlled trials, or RCTs, because that type of evidence is one of the big things that effective altruists like GiveWell base their judgments on -- their judgments about how to help people. And you've pretty famously written about why RCTs aren't so trustworthy.

Before reading your op-ed, I would have thought you would actually approve of the way GiveWell uses RCTs. So let me describe to you the way I see them using RCTs, and you can tell me if you actually do approve or not. 

So GiveWell's view is that most... I mean, I can't officially speak for them, but my perception of their view is that most research is pretty flawed, including the vast majority of randomized controlled trials. But that occasionally, you can have enough RCTs that are well done, in enough different contexts, that are looking at lots of different outcome measures, with a large enough effect size, that at that point you can be pretty confident that there's a real benefit there. 


Angus Deaton: You're making my hair stand on end. 

Julia Galef: So, you can't be 100% sure, but if you have a lot of different RCTs in different contexts with large effect sizes, then you can be probably confident enough to act on that. 

And so the small selection of charities that GiveWell recommends on their website are the exceptions to the rule. They're the cases where GiveWell thinks, "Okay, in this case, there actually is enough evidence that we feel comfortable recommending that people act on it, even though that is usually not the case." 

Maybe the way I misconstrued your view is that you don't think you can ever do that? Whereas in GiveWell's case, they think you can sometimes, occasionally do that. 

Angus Deaton: Well, I'm sure you can sometimes occasionally do it, but your language drives me bananas. 

Julia Galef: Okay, why? Which aspect? 

Angus Deaton: Well, for instance, replication tells you nothing. Think of all the white swans that there were in the world before the first black swan turned up. Read about Bertrand Russell's chicken.


That chicken hears the farmer coming every day and realizes, after 300 or 400 replications, that every time it hears the footsteps it's going to get fed, and it gets very happy when it hears the footsteps -- until Christmas Eve, when the farmer wrings its neck.

And the moral of that story, which I think paraphrases Bertrand Russell's words, is: a deeper understanding of the nature of the world would have been useful to the chicken under these circumstances. The point is, replication doesn't tell you anything.

Julia Galef: So even if you did a thousand RCTs, in tons of different countries, and every time you found that cash transfers increased people's consumption and made them happier -- you would claim that you haven't learned anything? Because you can never be sure that in the “thousand and oneth” case that you wouldn’t find a negative effect?

Angus Deaton: That's right. That's right.

Julia Galef: I see. So, I think we have different --

Angus Deaton: Well, unless you could tell me why it's happening. I don't need a randomized controlled trial to tell me that if people get better off, they get happier. Which is a lot of what the RCTs on cash transfers are doing.

Julia Galef: I thought that actually was an open empirical question, where it seems very commonsense, but we've done research and it wasn't obviously going to turn out to be true.

Angus Deaton: I don't think so. I think those RCTs on cash transfers are really silly.


The problem is that in some environments that's going to make people better off; in other environments it's not. And you're not going to get at that by doing replications of randomized controlled trials.

Because in some governments, they'd let people enjoy the money, in other places they wouldn't let them enjoy the money. And lots of other contingencies that are not taken into account. So you have to have a basic structure mechanism of what you think is going on here.

I'm not against randomized controlled trials, but this idea that if you do them often enough, like the graduation experiment, then somehow it always works, is really preposterous, both logically and in practice.

And then you use the term effect sizes. Effect sizes are a completely disreputable statistical concept.


These people are using effect size all the time, because they want to compare things across countries. And you can't compare things across countries if they're in different currencies and if they're in different places. 

So they use effect sizes, and effect sizes rob the whole thing of meaning. You do a training program for people, a training program for dogs, and you could look at the effect size --

Julia Galef: Well, I don't know what the people you're complaining about are doing, but I imagine if you're testing a specific intervention -- like giving out anti-malarial bed nets -- the cases in different countries or different regions aren't going to be identical, but it's still pretty similar, what you're doing from one region to the other. You're giving out bed nets.

Angus Deaton: I don't agree, because all the side effects, which are the things we're talking about, are going to be different in each case. 

And also, just to take a case -- we know what reduces poverty, what makes people better off: it's school teachers, it's malaria pills, it's all these things. 

Julia Galef: How do we know that, though?

Angus Deaton: Oh, come on.

Julia Galef: No, I'm sorry, that was not a rhetorical or a troll question.

Angus Deaton: Really? I don't know how you get out of bed in the morning. How do you know that when you stand up, you won't fall over? I mean, there's been no experiments on that. There's never been an experiment on aspirin. Have you ever taken an aspirin?

Julia Galef: So, sorry, you think that increasing the number of schoolteachers -- or paying them better, or some intervention on schoolteachers causing people to be better off -- that that claim is as obvious as gravity?

Angus Deaton: It's pretty obvious. But that's not the point I'm trying to make. The point I'm trying to make is about what happens if you send a bunch of people who do experiments -- students from MIT or wherever -- to do experiments in these countries. I don't know if you know about the graduation program, but the graduation program is regarded as one of the great stars in the firmament of this thing.


The graduation program is a program in a bunch of different countries, in which people are given some capital in the form of guinea pigs, or chickens or sheep or something. They're also given advice on how to farm. And then they're revisited. Maybe they're given some money. I forget exactly the details.

And then you come back after a year or two years and see whether they're better off, whether they're earning more money, whether their enterprise is working and so on. So the idea is to try and get people over the hump, which otherwise is keeping them trapped in a poverty trap.

And they got pretty positive results in all but a couple of countries, and so they put a great weight on the replication. They do the standardized effect size, which I think is nonsense. 

But the point there is, the question is not whether those things can work. We're pretty sure that these things can work. The question is whether government civil servants or government employees working under all the usual constraints of employing workers and all the incentives that go with that, can actually do that.

And that comes to the crux of the matter, really. It's really whether the countries can do this for themselves. Because if we can develop general methods, things that look like they're promising, then local people have to adapt them for themselves.

So this takes us back to where you started, which is this question, we've got to use local knowledge. We can send blueprints to places, they can look at it and say, "This is interesting, maybe this would work in our context if we adapted this." And that to me makes sense.

I'm just not persuaded by any number of randomized controlled trials, as they're usually run at least.
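(A brief editorial note on the "standardized effect sizes" Deaton objects to above: a standardized effect size such as Cohen's d divides the difference in group means by the pooled standard deviation, which makes results unit-free and therefore comparable across studies run in different currencies or on different scales -- exactly the cross-country comparison practice Deaton is criticizing. A minimal illustrative sketch, with hypothetical numbers:)

```python
import math

def cohens_d(treatment, control):
    """Standardized effect size: the difference in group means divided by
    the pooled standard deviation. Dividing by the SD strips the units,
    so results measured in different currencies become comparable."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (Bessel-corrected), then the pooled SD
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical consumption data in arbitrary local currency units
treated = [12.0, 14.0, 13.0, 15.0]
untreated = [10.0, 11.0, 9.0, 10.0]
print(round(cohens_d(treated, untreated), 2))  # → 3.24
```

Deaton's complaint, as I read it, is that this unit-stripping is precisely what discards the context (prices, institutions, side effects) that determines whether the result transfers.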


Comments:

Angus Deaton:

There's never been an experiment on aspirin.

Noah Smith:

If I go to PubMed and search for “aspirin randomized controlled trial”, I get 7,586 results. There are reportedly 700 to 1000 clinical trials conducted on aspirin every year. There were also experiments involved in the invention of aspirin; people knew that salicylic acid helped with headaches, but extracting and buffering the chemical were both non-trivial tasks.

I was looking all over Dylan Matthews' Twitter feed for this reply, and forgot it was actually Noah Smith's. Thanks for linking!


This kind of debate is why I'd like to see the next wave of Tetlock-style research focus on the predictive value of different types of evidence. We know a good bit now about the types of cognitive styles that are useful for predicting the future, and even for estimating causal effects in simulated worlds. But we still don't know that much about the kinds of evidence that help. (Base rates, sure, but what else?) Say you're trying to predict the outcome of an experiment. Is reading about a similar experiment helpful? Is descriptive data helpful? Is interviewing three people who've experienced the phenomenon? When is each more or less useful? It's time to take these questions about evidence out of the realms of philosophy, statistical theory, and personal opinion and study them as social phenomena. And yes, that is circular, because what kind of evidence on evidence counts? But I think we'd still benefit from knowing a lot more about the usefulness of different sorts of evidence, and prediction tournaments would be a nice way to study their cash value.

It strikes me that Deaton has, in theory, got a point. To put a label on it, one should not do 'randomisation (or replication) without explanation'. Regarding Russell's chicken, the flaw in the chicken's assuming it will get fed today is that it hasn't understood the structure of reality. Yet this does not show one should, in practice, give up on RCTs and replication, only that one should use them in combination with a thoughtful understanding of the world.

For Deaton's worry to have force, one would need to believe that because one context might be different from another, we should assume it is. Yet, saliently, that doesn't follow. There could be a fairly futile argument about on whom the burden of proof lies to show that one context of replication is relevantly like another, but it seems the dutiful next thing to do would be for advocates to argue why they think it is and for critics to argue why it isn't.

I am intrigued by his separate point that getting governments to be more receptive to their citizens is a valuable intervention -- the point being that, in poor countries, governments collect so little tax from those in poverty that they feel little incentive to notice them.


Is the topic of the arms trade that he mentions considered in the EA community?