I want to draw attention to a tension effective altruists have not dealt with:

  1. Almost all of our written output takes as a strong assumption that economic growth and technological advancement are good things.
  2. Many intellectuals think this is actually unclear.
Why might economic growth or technological advancement be neutral, or even bad? Here are some possibilities:
  • We invent dangerous new technologies sooner, while society remains unwise, immature, and unable to use them safely. Or dangerous new technologies move from possibility to reality more quickly, giving us less time to evaluate and limit their risks. For more on this see section 9.42, 3.
  • We become richer, and this enables e.g. more destructive conflicts (poor countries have weaker and less destructive armies).
  • Producing more wealth is currently doing more harm than good (e.g. via climate change, other environmental destruction, the spread of factory farming or selfish materialism, etc.).
The belief that economic growth is proceeding too quickly and will destroy the natural resources we rely on is widely held; the possibilities above are variants on that theme of unintended consequences.

Why are we so cautious about raising these issues?
  • They violate common sense for most people.
  • The arguments in their favour are hard to explain quickly.
  • Over the last 200 years, growth seems to have been a force for good; you look ignorant or deluded if you suggest that something that was good in the past will not continue to be good in the future.
  • They involve speculation about the direction of future technologies that most people find unpersuasive and unrigorous.
  • They can have offensive implications, such as the idea that it would be better for people in poverty today to remain poor, and that the things most people do to improve the world aren't working or are even making things worse.
  • Our ability to further raise economic growth or technological advancement is small anyway: billions of people are already pursuing those goals, so we are a tiny fraction of the total effort.
  • Projects focused on reducing poverty also raise average global intelligence, education, income, governance, patience, and so on. These 'quality' effects may well dominate.
  • Other modelling suggests the overall effect is very unclear (e.g. wars seem to occur less frequently when economic growth is strong; faster growth lowers the number of years spent in any particular state of development, lowering so-called 'state risk', as the sketch below illustrates; some technologies clearly lower existing risks, e.g. we could now divert an asteroid away from Earth).
These seem like sound reasons not to make the risks of broad human empowerment a central part of our message. Some are also good reasons to think that economic growth is indeed more likely to be good than bad.
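To make the 'state risk' point concrete, here is a toy sketch with made-up numbers (the 0.1% annual risk and the 50- vs 25-year durations are purely illustrative assumptions, not estimates): if each year spent at a given level of development carries a roughly constant chance of catastrophe, then the total risk accumulated in that state scales with how long we remain there, so growing through it faster means accumulating less risk.

```python
# Toy sketch of 'state risk'. All figures are made up for illustration;
# this is not a model of any real risk estimate.

def cumulative_state_risk(annual_risk: float, years_in_state: float) -> float:
    """Probability of at least one catastrophe while in a given development state."""
    return 1 - (1 - annual_risk) ** years_in_state

# Hypothetical figures: 0.1% annual risk; slow growth spends 50 years in the
# risky state, faster growth only 25.
slow = cumulative_state_risk(0.001, 50)   # ~4.9%
fast = cumulative_state_risk(0.001, 25)   # ~2.5%
print(f"slow growth: {slow:.1%}, fast growth: {fast:.1%}")
```

This only illustrates why the time spent in a risky state matters; the other effects listed above pull in their own directions.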

But I nonetheless feel uncomfortable sidestepping the issue entirely. 80,000 Hours currently highly recommends technological entrepreneurship as a way to do good directly. Can we do that in good conscience without drawing people's attention to the ways that their work could make the future worse rather than better?

We should at least pose people the following question to help them improve the quality of the projects they decide to pursue:
  • Imagine that you somehow knew economic growth, or technological advancement, was merely neutral on average. Though this view is controversial, some smart people believe it. Would your project nonetheless be one of those that is 'better than average', and therefore a force for good?
  • Some things that have been suggested to look good on the 'differential technological development' test include:
    • making people more cosmopolitan, kind and cautious;
    • improving the ability to coordinate countries and avoid e.g. prisoner's dilemmas;
    • increasing wisdom (especially the ability to foresee and solve future problems and conflicts);
    • predominantly reducing pressing existing risks, such as climate change;
    • predominantly empowering the people with the best values.
If your project passes this test, that's a sign it's robustly good. If your project only looks good if economic growth is overall a force for good, then it's on shakier ground.
Comments (18)



Eliezer recently posted advice for central banks on how to accelerate economic growth. I'm not sure if that means he has changed his mind. (Maybe he's deliberately giving them bad advice.)

I don't think he or anyone expects them to listen to him.


A link to Paul Christiano's excellent 'On Progress and Prosperity' shouldn't be left out of this discussion:

http://effective-altruism.com/ea/9f/on_progress_and_prosperity/

Even if growth were bad or neutral, there would have to be specific activities that were bad, and other activities that remained good. So how does this differ from just telling folks to look for ways that their society might hurt itself, or ways that they might be contributing to this antisocial behavior? There is a lot of disagreement about which behaviors, exactly, are antisocial.

I do worry that given enough time, industrialized countries will, um, self-destruct by using nuclear weapons. But in that case the remedy would probably not be giving up industrialization. That seems like too high a cost.

It's also possible that growth may not be that important because growth is becoming much harder or impossible. But is it?

One point you make is that during the last 200 years growth has helped. Without strong evidence against it, it seems hard to make any assumption but that trends continue. So I think growth is good; growing societies will either be looked to and emulated by other groups that want the same rewards, or else powerful growing societies will just conquer other weaker ones. Either way, growth seems like the winning strategy.

Almost all of our written output takes as a strong assumption that economic growth and technological advancement are good things.

For what it's worth, I think this conclusion is extremely non-obvious and I'm somewhat disheartened when I see people taking it for granted. Most people are prone to optimism bias.

Why are we so cautious about raising these issues?

There may be a sampling bias here. People at Stanford EA talk about these issues, and I read about them online all the time. I haven't interacted much with CEA/Oxford people but my impression is you guys are a lot less willing to acknowledge that anything might be harmful, and less willing to discuss weird ideas.

"People at Stanford EA talk about these issues, and I read about them online all the time."

I've visited virtually every EA chapter and I think Stanford is the single most extreme one in this regard.

I don't want to interpret that post on flow-through effects as representing anything other than Holden's personal opinion, but it does strike me as pretty naive (in the mathematical sense of "you only thought of the most obvious conclusion and didn't go into any depth on this"). GiveWell's lack of (public) reasoning on flow-through effects is a large part of why I don't follow its charity recommendations.

The post on differential progress is a step in the right direction, and I'm generally more confident that Nick Beckstead is thinking correctly about flow-through effects than I am about anyone else at GiveWell.

EDIT: To Holden's credit, he does discuss how global catastrophic risks could make technological/economic progress harmful, so it's not like he hasn't thought about this at all.

The level of confidence in 'broad empowerment' as a force for good has always been my biggest disagreement with GiveWell.

Nice post. Can I suggest you're missing the most obvious one from your test?

How about "making people happier"?

which you could rephrase as

"reducing suffering/connecting people/empowering people to life the lives they want."

I'm one of those (controversial?) people who thinks most economic and technological development is morally neutral and does surprisingly little to make people's lives better, largely because people adapt to it and it doesn't make a difference over the long run. I'm actually planning to make this argument in a longer post soon, as I also think it's something of a neglected issue.

"reducing suffering/connecting people/empowering people to life the lives they want."

Are you saying that's probably an example of positive differential progress, or that because it's good in the immediate term, it should be good overall?

If the former, could you flesh out the reason?

To me, development in the 'reducing suffering/connecting people' category is the most interesting and meaningful, as a seed from which everything else grows most effectively. I wish at least 1 in 100 intellectuals would ask questions about the hierarchy of life's purpose and meaning, and then about what makes their life effective relative to that purpose.

If growth isn't a force for good, and if you believe investment banking and similar roles damage the economy, that makes earning to give via them look more attractive.

Perhaps this can be an area of work for the Good Technology Project?

Other modelling suggests the overall effect is very unclear

Can I ask what modelling/whose?

Some are also good reasons to think that economic growth is indeed more likely to be good than bad.

True, although only the last two even seem valid to me.
