I want to draw attention to a tension effective altruists have not dealt with:
- Almost all of our written output rests on the strong assumption that economic growth and technological advancement are good things.
- Many intellectuals think this is actually unclear.
Why might economic growth or technological advancement be neutral, or even bad? Here are some possibilities:
- We invent dangerous new technologies sooner, while society remains unwise, immature and unable to use them safely. Or new dangerous technologies advance from possibilities to realities more quickly, giving us less time to evaluate and limit their risks. For more on this see section 9.4; 2, 3.
- We become richer, and this enables e.g. more destructive conflicts (poor countries can only field weaker, less destructive armies).
- Producing more wealth is currently doing more harm than good (e.g. via climate change, other environmental destruction, spread of factory farming or selfish materialism, etc).
The widely held belief that economic growth is proceeding too quickly and will destroy the natural resources we rely on is one variant on this theme of unintended consequences.
Why are we so cautious about raising these issues?
- They violate common sense for most people.
- The arguments in their favour are hard to explain quickly.
- Over the last 200 years growth seems to have been a force for good; you look ignorant or deluded suggesting that something that was good in the past will not continue to be good in the future.
- They involve speculation about the direction of future technologies that most people find unpersuasive and unrigorous.
- They can have offensive implications, such as the idea that it would be better for people in poverty today to remain poor, and that the things most people do to improve the world aren't working or are even making things worse.
- Our ability to further raise economic growth or technological advancement is small anyway, because billions of people are already pursuing those goals so we are a tiny fraction of the total.
- Projects focussed on reducing poverty also raise average global intelligence, education, income, governance, patience, and so on. These 'quality' effects may well dominate.
- Other modelling suggests the overall effect is very unclear (e.g. wars seem to occur less frequently when economic growth is strong; faster growth lowers the number of years spent in any particular state of development, lowering so-called 'state risk'; some technologies clearly lower existing risks, e.g. we could now divert an asteroid away from Earth).
These seem like sound reasons not to make the risks of broad human empowerment a central part of our message. Some are also good reasons to think that economic growth is indeed more likely to be good than bad.
But I nonetheless feel uncomfortable sidestepping the issue entirely. 80,000 Hours currently highly recommends technological entrepreneurship as a way to do good directly. Can we do that in good conscience without drawing people's attention to the ways that their work could make the future worse rather than better?
We should at least pose people the following question to help them improve the quality of the projects they decide to pursue:
- Imagine that you somehow knew economic growth, or technological advancement, was merely neutral on average. While controversial, some smart people believe this to be true. Would your project nonetheless be one of those that is 'better than average' and therefore a force for good?
- Some things that have been suggested to look good on the 'differential technological development' test include:
- making people more cosmopolitan, kind and cautious;
- improving the ability to coordinate countries and avoid e.g. prisoner's dilemmas;
- increasing wisdom (especially the ability to foresee and solve future problems and conflicts);
- predominantly reducing pressing existing risks, such as climate change;
- predominantly empowering the people with the best values.
If your project passes this test, that's a sign it's robustly good. If your project only looks good if economic growth is overall a force for good, then it's on shakier ground.
Related: http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/
And http://www.overcomingbias.com/2009/12/tiptoe-or-dash-to-future.html
Also related, on facebook: https://www.facebook.com/yudkowsky/posts/10151665252179228
Eliezer recently posted advice for central banks on how to accelerate economic growth. I'm not sure if that means he has changed his mind. (Maybe he's deliberately giving them bad advice.)
I don't think he or anyone expects them to listen to him.
A link to Paul Christiano's excellent 'On Progress and Prosperity' shouldn't be left out in this discussion:
http://effective-altruism.com/ea/9f/on_progress_and_prosperity/
Even if growth were bad or neutral, there would have to be specific activities that were bad, and other activities that remained good. So how does this differ from just telling folks to look for ways that their society might hurt itself, or ways that they might be contributing to this antisocial behavior? There is a lot of disagreement about which behaviors, exactly, are antisocial.
I do worry that given enough time, industrialized countries will, um, self-destruct by using nuclear weapons. But in that case the remedy would probably not be giving up industrialization. That seems like too high a cost.
It's also possible that growth may not be that important because growth is becoming much harder or impossible. But is it?
One point you make is that during the last 200 years growth has helped. Without strong evidence against it, it seems hard to make any assumption but that trends continue. So I think growth is good; growing societies will either be looked to and emulated by other groups that want the same rewards, or else powerful growing societies will just conquer other weaker ones. Either way, growth seems like the winning strategy.
For what it's worth, I think this conclusion is extremely non-obvious and I'm somewhat disheartened when I see people taking it for granted. Most people are prone to optimism bias.
There may be a sampling bias here. People at Stanford EA talk about these issues, and I read about them online all the time. I haven't interacted much with CEA/Oxford people but my impression is you guys are a lot less willing to acknowledge that anything might be harmful, and less willing to discuss weird ideas.
"People at Stanford EA talk about these issues, and I read about them online all the time."
I've visited virtually every EA chapter and I think Stanford is the single most extreme one in this regard.
And GiveWell - their published statements on this matter basically just say they assume it's good: http://blog.givewell.org/2013/04/04/deep-value-judgments-and-worldview-characteristics/
With a little more detail: http://blog.givewell.org/2013/05/15/flow-through-effects/
But recently there was this cool post: http://blog.givewell.org/2015/09/30/differential-technological-development-some-early-thinking/
I don't want to interpret that post on flow-through effects as representing anything other than Holden's personal opinion, but it does strike me as pretty naive (in the mathematical sense of "you only thought of the most obvious conclusion and didn't go into any depth on this"). GiveWell's lack of (public) reasoning on flow-through effects is a large part of why I don't follow its charity recommendations.
The post on differential progress is a step in the right direction, and I'm generally more confident that Nick Beckstead is thinking correctly about flow-through effects than I am about anyone else at GiveWell.
EDIT: To Holden's credit, he does discuss how global catastrophic risks could make technological/economic progress harmful, so it's not like he hasn't thought about this at all.
The level of confidence in 'broad empowerment' as a force for good has always been my biggest disagreement with GiveWell.
Nice post. Can I suggest you're missing the most obvious one from your test?
How about "making people happier"?
which you could rephrase as
"reducing suffering/connecting people/empowering people to life the lives they want."
I'm one of those (controversial?) people who thinks most economic and technological development is morally neutral and does surprisingly little to make people's lives better, largely because people adapt to it and it doesn't make a difference over the long run. I'm actually planning to make this argument in a longer post soon, as I also think it's something of a neglected issue.
"reducing suffering/connecting people/empowering people to life the lives they want."
Are you saying that's probably an example of positive differential progress, or that because it's good in the immediate term, it should be good overall?
If the former could you flesh out the reason?
To me, development in the "reducing suffering/connecting" category is the most interesting and meaningful, as a seed from which everything else grows most effectively. I wish at least 1 in 100 intellectuals would ask questions about the hierarchy of life's purpose and meaning, and then about what makes their life effective relative to that purpose.
If it's not a force for good, and if you believe investment banking and similar roles damage the economy, that makes earning to give via them look more attractive.
Perhaps this can be an area of work for the Good Technology Project?
Can I ask what modelling/whose?
True, although only the last two even seem valid to me.