
trammell

2171 karma

Bio

Postdoc at the Digital Economy Lab, Stanford, and research affiliate at the Global Priorities Institute, Oxford. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Sequences

The Ambiguous Economics of Full Automation

Comments

I’m not defending AI risk reduction, nor even longtermism. I’m arguing only that David Thorstad’s claim in “The Scope of Longtermism” was rebutted before it was written. 

Almost all longtermists think that some interventions are better than asteroid monitoring. To be conservative and argue that longtermism is true even if one disagrees with the harder-to-quantify interventions most longtermists happen to favor, the Case uses an intervention with low but relatively precise impact, namely asteroid monitoring, and argues that it does more than 2x as much good in the long term as the top GiveWell charity does in the short term. 

This is a non-sequitur. The “scope” which he claims is narrow is a scope of decision situations, and in every decision situation involving where to give a dollar, we can give it to asteroid monitoring efforts. 

Answer by trammell

I know David well, and David, if you're reading this, apologies if it comes across as a bit uncharitable. But as far as I've ever been able to see, every important argument he makes in any of his papers against longtermism or the astronomical value of x-risk reduction was refuted pretty unambiguously before it was written. An unfortunate feature of an objection that comes after its own rebuttal is that sometimes people familiar with the arguments will skim it and say "weird, nothing new here" and move on, and people encountering it for the first time will think no response has been made.

For example,[1] I think the standard response to his arguments in "The Scope of Longtermism" would just be the Greaves and MacAskill "Case for Strong Longtermism".[2] The Case, in a nutshell, is that by giving to the Planetary Society or B612 Foundation to improve our asteroid/comet monitoring, we do more than 2x as much good in the long term, even on relatively low estimates of the value of the future, as giving to the top GiveWell charity does in the short term. So if you think GiveWell tells us the most cost-effective way to improve the short term, you have to think that, whenever your decision problem is "where to give a dollar", the overall best action does more good in the long term than in the short term.
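To make the structure of that comparison concrete, here is the bare arithmetic with placeholder numbers (these are illustrative assumptions of mine, not Greaves and MacAskill's estimates):

```python
# Schematic version of the Case's comparison. Every number below is a
# hypothetical placeholder, not an estimate from Greaves and MacAskill.
budget = 1e6                     # dollars given to asteroid/comet monitoring
risk_reduction = 1e-9            # assumed drop in extinction probability bought
future_lives = 1e16              # a relatively low estimate of lives in the future
long_term_good = risk_reduction * future_lives   # expected future lives saved

cost_per_life = 5_000            # rough cost per life saved, top GiveWell charity
short_term_good = budget / cost_per_life         # expected present lives saved

print(long_term_good / short_term_good)  # the Case needs this ratio to exceed 2
```

With anything like these magnitudes the ratio clears the 2x bar by orders of magnitude, which is why the argument is robust to even large errors in the individual estimates.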

You can certainly disagree with this argument on various grounds--e.g. you can think that non-GiveWell charities do much more good in the short term, or that the value of preventing extinction by asteroid is negative, or for that matter that the Planetary Society or B612 Foundation will just steal the money--but not with the arguments David offers in "The Scope of Longtermism".

His argument [again, in a nutshell] is that there are three common "scope-limiting phenomena", i.e. phenomena that make it the case that the overall best action does more good in the long term than in the short term in only relatively few decision situations. These are

  1. rapid diminution (the positive impact of the action per unit time quickly falls to 0),
  2. washing out (the long-term impact of the action has positive and negative features which are hard to predict and cancel out in expectation), and
  3. option unawareness (there's an action that would empirically have large long-term impact but we don't know what it is).

He grants that when Congress was deciding what to do with the money that originally went into an asteroid monitoring program called the Space Guard Survey, longtermism seems to have held. So he's explicitly not relying on an argument that there isn't much value to trying to prevent x-risk from asteroids. Nevertheless, he never addresses the natural follow-up regarding contributing to improved asteroid monitoring today.

Re (1), he cites Kelly (2019) and Sevilla (2021) as reasons to be skeptical of claims from the "persistence" literature about various distant cultural, technological, or military developments having had long-term effects on the arc of history. Granting this doesn't affect the Case that whenever your decision problem is "where to give a dollar", the overall best action does more good in the long term than in the short term.[3]

Re (2), he says that we often have only weak evidence about a given action's impact on the long-term future. He defends this by pointing out (a) that attempts to forecast actions' impacts on a >20 year timescale have a mixed track record, (b) that professional forecasters are often skeptical of the ability to make such forecasts, and (c) that the overall impact of an action on the value of the world is typically composed of its impacts on various other variables (e.g. the number of people and how well-off they are), and since it's hard to forecast any of these components, it's typically even harder to forecast the action's impact on value itself. None of this applies to the Case. We can grant that most actions have hard-to-predict long-term consequences, and that forecasters would recognize this, without denying that in most decision situations (including all those where the question is where to give a dollar), there is one action whose long-term benefits are more than 2x as great as the short-term benefits of giving to the top GiveWell charity: namely, giving to the Planetary Society or B612 Foundation. There is no mixed track record of forecasting the >20 year impact of asteroid/comet monitoring, there is no evidence that professional forecasters are skeptical of making such forecasts, and he implicitly grants, in discussing the Space Guard Survey, that the complexity of forecasting asteroid monitoring's long-term impact on value isn't an issue.

Re (3), again, the claim the Case makes is that we have identified one such action.

  1. ^

    I also emailed him about an objection to his "Existential Risk Pessimism and the Time of Perils" in November and followed up in February, but he's responded only to say that he's been too busy to consider it.

  2. ^

    Which he cites! Note that Greaves and MacAskill defend a stronger view than the one I'm presenting here, in particular that all near-best actions do much more good in the long term than in the short term. But what David argues against is the weaker view I lay out here.

  3. ^

    Incidentally, he cites the fact that "Hiroshima and Nagasaki returned to their pre-war population levels by the mid-1950s" as an especially striking illustration of lack of persistence. But as I mentioned to him at the time, it's compatible with the possibility that those regions have some population path, and we "jumped back in time" on it, such that from now on the cities always have about as many people at t as they would have had at t+10. If so, bombing them could still have most of its effects in the future.

In Young's case the exponent on ideas is one, and progress looks like log(log(researchers)). (You need to pay a fixed cost to make the good at all in a given period, so only if you go above that do you make positive progress.) See Section 2.2.
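To see how flat that functional form is, here's a quick plug-in (my arithmetic, using natural logs and ignoring the fixed-cost threshold):

$$\log(\log(10^6)) \approx \log(13.8) \approx 2.63, \qquad \log(\log(10^9)) \approx \log(20.7) \approx 3.03,$$

so a thousandfold increase in researchers raises the progress term by only about 15%.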

Peretto (2018) and Massari and Peretto (2025) have SWE models that I think do successfully avoid the knife-edge issue (or "linearity critique"), but at the cost of, in some sense, digging the hole deeper when it comes to the excess variety issue.
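For anyone unfamiliar with the term: the knife-edge issue is the standard linearity critique of fully endogenous growth models (this is the generic statement, not Peretto's particular setup). With an idea production function

$$\dot{A} = \delta S A^{\phi}$$

and a constant stock of researchers $S$, the growth rate is $g_A = \dot{A}/A = \delta S A^{\phi-1}$: it falls to zero if $\phi < 1$, explodes in finite time if $\phi > 1$, and delivers steady exponential growth only at exactly $\phi = 1$.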

Thanks!

And yeah, that's fair. One possible SWE-style story I sort of hint at there is that we have preferences like the ones I use in the horses paper; process efficiency for any given product grows exponentially with a fixed population; and there are fixed labor costs to producing any given product. In this case, it's clear that measured GDP/capita growth will be exponential (but all "vertical") with a fixed population. But if you set things up in just the right way, so that measured GDP always increases by the same proportion when the range of products increases by some marginal proportion, it will also be exponential with a growing population ("vertical"+"horizontal").

But I think it's hard to not have this all be a bit ad-hoc / knife-edge. E.g. you'll typically have to start out ever less productive at making the new products, or else the contribution to real GDP of successive % increases in the product range will blow up: as you satiate in existing products, you're willing to trade ever more of them for a proportional increase in variety. Alternatively, you can say that the range of products grows subexponentially when the population grows exponentially, because the fixed costs of the later products are higher.
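To see the satiation mechanism concretely, here's a toy calculation (entirely my construction, not the setup in the horses paper): N symmetric products, each consumed at level c, with bounded per-good utility v(c) = 1 − e^(−c), so total utility is N·v(c). The question is what share of total consumption you'd give up for 1% more variety, holding utility fixed.

```python
# Toy illustration of satiation (my construction, not the horses-paper model).
# Utility is U = N * v(c): N symmetric products, each consumed at level c,
# with bounded per-good utility v(c) = 1 - exp(-c).
import math

def wtp_share(c, dN=0.01):
    """Share of total consumption X = N*c you'd give up for a 1% increase
    in variety N, holding utility fixed."""
    v = 1 - math.exp(-c)              # per-good utility before the trade
    v_target = v / (1 + dN)           # per-good utility needed after N rises 1%
    c_new = -math.log(1 - v_target)   # consumption per good delivering v_target
    return 1 - (1 + dN) * c_new / c   # fraction of X sacrificed

for c in [1, 5, 10, 20, 40]:
    print(f"c = {c:>2}: give up {wtp_share(c):.1%} of consumption for 1% more variety")
```

The share rises toward 100% as c grows, which is exactly the blow-up: each successive 1% of extra variety is worth an ever larger chunk of measured consumption.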

A bit tangential, but I can't help sharing a data point I came across recently on how prepared the US government currently is for advanced AI: our secretary of education apparently thinks it stands for "A1", like the steak sauce (h/t). (On the bright side, of course, this is a department the administration is looking to shut down.)

(FYI though I think we've chatted about several new varieties issues that I think could come up in the event of a big change in "growth mode", and this post is just about one of them.)

Thanks! People have certainly argued at least since Marx that if the people owning the capital get all the income, that will affect the state. I think more recent/quantitative work on this, e.g. by Stiglitz, has generally focused on the effects of inequality in wealth or income, rather than the effects of inequality via a high capital share per se. But this isn't my area at all—ask your favorite LLM : )

The reference point argument is also about consumption inequality rather than what gives rise to it. My guess would be that if we all really get radical life extension and a huge quantity of amazing goods and services, that will probably, for most people, outweigh whatever jealousy comes with the knowledge that others got more, but who knows.

In any event, my guess would be that even if the marginal product of labor stays high or rises following full automation, most people's incomes will eventually come not from wages, but from interest on whatever investments they have (even if they started small) or from redistribution. And full automation could well trigger so much redistribution that income inequality shrinks, since it will remove one motivation for letting income inequality remain high today: namely that, unlike robots, productive people can be discouraged from working if they're taxed too heavily.
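As a back-of-the-envelope on the "even if they started small" point (my numbers, purely illustrative): continuously compounded wealth follows

$$W_t = W_0 e^{rt},$$

so at a post-automation real return of, say, r = 10% a year, $10,000 grows to about $74,000 in 20 years; at r = 30%, the kind of return sometimes floated for an accelerated-growth regime, it grows to about $4 million.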
