by [anonymous]
Nobel Prize-winning economist William Nordhaus has written a paper called 'Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth'. The NBER working paper is here, and the published version appeared in 2021.

He discusses various tests of whether the singularity - a large trend break in economic growth - is near. He argues that the tests suggest that the singularity is not near, i.e. not before 2100. I would be interested to hear what people think about whether this is a good test of AI timeline predictions. 

Comments



The relevant section is VII. Summarizing the six empirical tests:

  1. You'd expect productivity growth to accelerate as you approach the singularity, but it is slowing.
  2. The capital share should approach 100% as you approach the singularity. The share is growing, but only at ~0.5 percentage points per year. At that rate it would take roughly 100 years to approach 100%.
  3. Capital should get very cheap as you approach the singularity. But capital costs (outside of computers) are falling relatively slowly.
  4. The total stock of capital should get large as you approach the singularity. In fact the stock of capital is slowly falling relative to output.
  5. Information should become an increasingly important part of the capital stock as you approach the singularity. This share is increasing, but will also take >100 years to become dominant.
  6. Wage growth should accelerate as you approach the singularity, but it is slowing.
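The arithmetic behind test 2 can be made explicit with a quick sketch. The starting share and the "approach" threshold below are illustrative assumptions, not figures from Nordhaus's paper:

```python
# Back-of-envelope for test 2: if the capital share of income rises
# linearly by ~0.5 percentage points per year, how long until it
# approaches 100%? The ~40% starting share and the 95% "approach"
# threshold are illustrative assumptions, not numbers from the paper.

def years_to_share(start_pct, target_pct, growth_pp_per_year):
    """Years for a linearly growing share to move from start to target."""
    return (target_pct - start_pct) / growth_pp_per_year

print(years_to_share(40.0, 95.0, 0.5))  # on the order of 100 years
```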

I would group these into two basic classes of evidence:

  • We aren't getting much more productive, but that's what a singularity is supposed to be all about.
  • Capital and IT extrapolations are potentially compatible with a singularity, but only on a timescale of 100+ years.

I'd agree that these seem like two points of evidence against singularity-soon, and I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100. (Though I'd still have a meaningful probability soon, and even at 100 years the prospect of a singularity would be one of the most important facts about the basic shape of the future.)

There are some more detailed aspects of the model that I don't buy, e.g. the very high share of information capital and persistent slow growth of physical capital. But I don't think they really affect the bottom line.

[anonymous]

Thanks for outlining the tests.

I'm not really sure what he thinks the probability of the singularity before 2100 is. My reading was that, given his tests, he probably doesn't think the singularity is (e.g.) more than 10% likely before 2100. Two of the seven tests suggest a singularity after 100+ years, and five of them fail. It might be worth someone asking him for his view on that.

I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100.

To what extent is this a repudiation of Roodman's outside-view projection? My guess is you'd say something like "This new paper is more detailed and trustworthy than Roodman's simple model, so I'm assigning it more weight, but still putting a decent amount of weight on Roodman's being roughly correct and that's why I said <50% instead of <10%."

  1. I think that acceleration is autocorrelated: if things are accelerating rapidly at time T, they are also more likely to be accelerating rapidly at time T+1. That's intuitively pretty likely, and it seems to show up pretty strongly in the data. Roodman makes no attempt to model it, in the interest of simplicity and analytical tractability. We are currently in a stagnant period, so I think you should expect continuing stagnation. I'm not sure exactly how large the effect is (obviously it depends on the model), but I think it's at least a 20-40 year delay. (There are two related angles for getting a sense of the effect: one is to observe that autocorrelations seem to fade away on the timescale of a few doublings, rather than being driven by some amount of calendar time; the other is to look at the fact that we've had something like ~40 years of relative stagnation.)
  2. I think it's plausible that historical acceleration was driven by population growth, and that just won't really happen going forward. So at a minimum we should be uncertain between Roodman's model and one that separates out population explicitly, which will tend to stagnate around the time population becomes limited by fertility rather than productivity.
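A toy way to see point 1 (all parameters here are made up for illustration, not fitted to anything): if growth-rate deviations follow an AR(1) process, a stagnant period decays only gradually, so current stagnation predicts near-term stagnation.

```python
# Toy AR(1) model of growth-rate deviations: g[t+1] = rho * g[t] + noise.
# rho, the initial -1 point deviation, and the horizon are illustrative
# assumptions, not estimates from either paper.

import random

def simulate(rho, g0, steps, sigma=0.0, seed=0):
    """Path of a growth-rate deviation under AR(1) persistence."""
    random.seed(seed)
    g, path = g0, [g0]
    for _ in range(steps):
        g = rho * g + random.gauss(0.0, sigma)
        path.append(g)
    return path

# With rho = 0.95, a -1 point (stagnation) deviation still retains more
# than half its size after 13 years, since 0.95**13 ≈ 0.51:
print(simulate(rho=0.95, g0=-1.0, steps=13)[-1])
```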

(I agree with Max Daniel below that I don't think that Nordhaus' methodology is inherently more trustworthy. I think it's dealing with a relatively small amount of pretty short-term data, and is generally using a much more opinionated model of what technological change would look like.)

I don't think this would be a good reaction because:

  • Nordhaus's paper was only formally published now, but it isn't substantially newer than Roodman's work. It has been available as an NBER working paper since at least 2018, and has been widely discussed among longtermists since then (e.g. I remember a conversation in fall 2018, and there may have been earlier ones). [ETA: Actually, Nordhaus's paper has circulated as a working/discussion paper since at least September 2015, and was e.g. mentioned in this piece of longtermist work from 2017.]
  • I've only had the chance to skim Roodman's work, but my quick impression is that it isn't straightforwardly the case that Nordhaus's model is "more detailed and trustworthy". Rather, it seems to me that both models are more detailed along different dimensions: Roodman's model explicitly incorporates noise/stochasticity, and in this sense is significantly more mathematically complex/sophisticated. On the other hand, Nordhaus's model incorporates more theoretical assumptions, e.g. about different types of "factors of production" and their relationship as represented by a "production function", similar to typical economic growth models. (Whereas Roodman is mostly fitting a model to a trend of a single quantity, in a way that's more agnostic about the theoretical mechanisms generating that trend.)
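For intuition on the single-quantity approach described above: a stylized, deterministic cousin of that kind of model is hyperbolic growth, which diverges in finite time. This is only a sketch with made-up parameters; Roodman's actual model is stochastic and fitted to data.

```python
# Stylized, deterministic hyperbolic growth: with dy/dt = a * y**2,
# the solution y(t) = y0 / (1 - a*y0*t) diverges at t* = 1/(a*y0).
# The parameters below are illustrative assumptions, not fitted values.

def blowup_time(y0, a):
    """Finite-time singularity of dy/dt = a*y^2 starting from y0."""
    return 1.0 / (a * y0)

def y(t, y0, a):
    """Closed-form solution of dy/dt = a*y^2 (valid for t < t*)."""
    return y0 / (1.0 - a * y0 * t)

y0, a = 1.0, 0.01
t_star = blowup_time(y0, a)  # 100.0 with these parameters
# The instantaneous growth rate a*y rises with the level, so growth
# accelerates as t approaches t*:
for t in (0, 50, 90, 99):
    print(t, a * y(t, y0, a))
```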
[anonymous]

As a matter of interest, where do papers such as this usually get discussed? Is it in personal conversation or in some particular online location?

I think in this case mostly informal personal conversations (which can include conversations e.g. within particular org's Slack groups or similar). It might also have been a slight overstatement that the paper was "widely discussed" - this impression might be due to a "selection effect" of me having noticed the paper early and being interested in such work.
