All of Ben Garfinkel's Comments + Replies

Democratising Risk - or how EA deals with critics

I'm not familiar with Zoe's work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above,  and being stuck with only Zoe's word for their claims, anything from a named community member along the lines of "this person has done good research/has been intellectually honest" would be a big update for me…. [The post] strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather to generate sympathy for the authors and support for their positi

... (read more)
RAB: Thanks Ben! That's very helpful info. I'll edit the initial comment to reflect my lowered credence in exaggeration or malfeasance.
Why AI alignment could be hard with modern deep learning

FWIW, I haven't had this impression.

Single data point: In the most recent survey on community opinion on AI risk, I was in at least the 75th percentile for pessimism (for roughly the same reasons Lukas suggests below). But I'm also seemingly unusually optimistic about alignment risk.

I haven't found that this is a really unusual combo: I think I know at least a few other people who are unusually pessimistic about 'AI going well,' but also at least moderately optimistic about alignment.

(Caveat that my apparently higher level of pessimism could also be explai... (read more)

All Possible Views About Humanity's Future Are Wild

Thanks for the clarification! I still feel a bit fuzzy on this line of thought, but hopefully understand a bit better now.

At least on my read, the post seems to discuss a couple different forms of wildness: let’s call them “temporal wildness” (we currently live at an unusually notable time) and “structural wildness” (the world is intuitively wild; the human trajectory is intuitively wild).[1]

I think I still don’t see the relevance of “structural wildness,” for evaluating fishiness arguments. As a silly example: Quantum mechanics is pretty intuitively wild,... (read more)

Ben, that sounds right to me. I also agree with what Paul said. And my intent was to talk about what you call temporal wildness, not what you call structural wildness.

I agree with both you and Arden that there is a certain sense in which the "conservative" view seems significantly less "wild" than my view, and that a reasonable person could find the "conservative" view significantly more attractive for this reason. But I still want to highlight that it's an extremely "wild" view in the scheme of things, and I think we shouldn't impose an inordinate burden of proof on updating from that view to mine.

All Possible Views About Humanity's Future Are Wild

To say a bit more here, on the epistemic relevance of wildness:

I take it that one of the main purposes of this post is to push back against “fishiness arguments,” like the argument that Will makes in “Are We Living at the Hinge of History?”

The basic idea, of course, is that it’s a priori very unlikely that any given person would find themselves living at the hinge of history (and correctly recognise this). Due to the fallibility of human reasoning and due to various possible sources of bias, however, it’s not as unlikely that a given person would mistakenl... (read more)

We were previously comparing two hypotheses:

  1. HoH-argument is mistaken
  2. Living at HoH

Now we're comparing three:

  1. "Wild times"-argument is mistaken
  2. Living at a wild time, but HoH-argument is mistaken
  3. Living at HoH

"Wild time" is almost as unlikely as HoH. Holden is trying to suggest it's comparably intuitively wild, and it has pretty similar anthropic / "base rate" force.

So if your arguments look solid,  "All futures are wild" makes hypothesis 2 look kind of lame/improbable---it has to posit a flaw in an argument, and also that you are living at a wildly improb... (read more)

ofer: I think the more decision-relevant probabilities involve "Someone believes they should act as if they live at the HoH" rather than "Someone believes they live at the HoH". Our actions may be much less important if 'this is all a dream/simulation' (for example). We should make our decisions in the way we wish everyone-similar-to-us-across-the-multiverse would make their decisions. As an analogy, suppose Alice finds herself getting elected as the president of the US. Let's imagine there are 10^100 citizens in the US. So Alice reasons that it's way more likely that she is delusional than that she is actually the president of the US. Should she act as if she is the president of the US anyway, or rather spend her time trying to regain her grip on reality? The 10^100 citizens want everyone in her situation to choose the former. It is critical to have a functioning president. And it does not matter if there are many delusional citizens who act as if they are the president. Their "mistake" does not matter. What matters is how the real president acts.
All Possible Views About Humanity's Future Are Wild

Some possible futures do feel relatively more "wild” to me, too, even if all of them are wild to a significant degree. If we suppose that wildness is actually pretty epistemically relevant (I’m not sure it is), then it could still matter a lot if some future is 10x wilder than another.

For example, take a prediction like this:

Humanity will build self-replicating robots and shoot them out into space at close to the speed of light; as they expand outward, they will construct giant spherical structures around all of the galaxy’s stars to extract tremendous v

... (read more)


Taboo "Outside View"

I suspect you are more broadly underestimating the extent to which people used "insect-level intelligence" as a generic stand-in for "pretty dumb," though I haven't looked at the discussion in Mind Children and Moravec may be making a stronger claim.

I think that's good push-back and a fair suggestion: I'm not sure how seriously the statement in Nick's paper was meant to be taken. I hadn't considered that it might be almost entirely a quip. (I may ask him about this.)

Moravec's discussion in Mind Children is similarly brief: He presents a graph of the co... (read more)

I do think my main impression of insect <-> simulated robot parity comes from very fuzzy evaluations of insect motor control vs simulated robot motor control (rather than from any careful analysis, of which I'm a bit more skeptical though I do think it's a relevant indicator that we are at least trying to actually figure out the answer here in a way that wasn't true historically). And I do have only a passing knowledge of insect behavior, from watching youtube videos and reading some book chapters about insect learning. So I don't think it's unfair to put it in the same reference class as Rodney Brooks' evaluations to the extent that his was intended as a serious evaluation.

Taboo "Outside View"

As a last thought here (no need to respond), I thought it might be useful to give one example of a concrete case where: (a) Tetlock’s work seems relevant, and I find the terms “inside view” and “outside view” natural to use, even though the case is relatively different from the ones Tetlock has studied; and (b) I think many people in the community have tended to underweight an “outside view.”

A few years ago, I pretty frequently encountered the claim that recently developed AI systems exhibited roughly “insect-level intelligence.” This claim was typically used... (read more)

The Nick Bostrom quote (from here) is:

In retrospect we know that the AI project couldn't possibly have succeeded at that stage. The hardware was simply not powerful enough. It seems that at least about 100 Tops is required for human-like performance, and possibly as much as 10^17 ops is needed. The computers in the seventies had a computing power comparable to that of insects. They also achieved approximately insect-level intelligence.

I would have guessed this is just a funny quip, in the sense that (i) it sure sounds like it's just a throw-away quip, no e... (read more)

Taboo "Outside View"

Thank you (and sorry for my delayed response)!

I shudder at the prospect of having a discussion about "Outside view vs inside view: which is better? Which is overrated and which is underrated?" (and I've worried that this thread may be tending in that direction) but I would really look forward to having a discussion about "let's look at Daniel's list of techniques and talk about which ones are overrated and underrated and in what circumstances each is appropriate."

I also shudder a bit at that prospect.

I am sometimes happy making pretty broad and sloppy ... (read more)

kokotajlod: I guess we can just agree to disagree on that for now. The example statement you gave would feel fine to me if it used the original meaning of "outside view" but not the new meaning, and since many people don't know (or sometimes forget) the original meaning... 100% agreement here, including on the bolded bit. Also agree here, but again I don't really care which one is overall more problematic because I think we have more precise concepts we can use and it's more helpful to use them instead of these big bags. I think I agree with all this as well, noting that this causal/deductive reasoning definition of inside view isn't necessarily what other people mean by inside view, and also isn't necessarily what Tetlock meant. I encourage you to use the term "causal/deductive reasoning" instead of "inside view," as you did here, it was helpful (e.g. if you had instead used "inside view" I would not have agreed with the claim about baseline bias).


Ben Garfinkel's Shortform

I'm not sure if you think this is an interesting point to notice that's useful for building a world-model, and/or a reason to be skeptical of technical alignment work. I'd agree with the former but disagree with the latter.

Mostly the former!

I think the point may have implications for how much we should prioritize alignment research, relative to other kinds of work, but this depends on what the previous version of someone's world model was.

For example, if someone has assumed that solving the 'alignment problem' is close to sufficient to ensure that human... (read more)

Taboo "Outside View"

It’s definitely entirely plausible that I’ve misunderstood your views.

My interpretation of the post was something like this:

There is a bag of things that people in the EA community tend to describe as “outside views.” Many of the things in this bag are over-rated or mis-used by members of the EA community, leading to bad beliefs.

One reason for this over-use or mis-use is that the term “outside view” has developed an extremely positive connotation within the community. People are applauded for saying that they’re relying on “outside views” — “outside

... (read more)
kokotajlod: Wow, that's an impressive amount of charitable reading + attempting-to-ITT you did just there, my hat goes off to you sir! I think that summary of my view is roughly correct. I think it over-emphasizes the applause light aspect compared to other things I was complaining about; in particular, there was my second point in the "this expansion of meaning is bad" section, about how people seem to think that it is important to have an outside view and an inside view (but only an inside view if you feel like you are an expert) which is, IMO, totally not the lesson one should draw from Tetlock's studies etc., especially not with the modern, expanded definition of these terms.

I also think that while I am mostly complaining about what's happened to "outside view," I also think similar things apply to "inside view" and thus I recommend tabooing it also. In general, the taboo solution feels right to me; when I imagine re-doing various conversations I've had, except without that phrase, and people instead using more specific terms, I feel like things would just be better.

I shudder at the prospect of having a discussion about "Outside view vs inside view: which is better? Which is overrated and which is underrated?" (and I've worried that this thread may be tending in that direction) but I would really look forward to having a discussion about "let's look at Daniel's list of techniques and talk about which ones are overrated and underrated and in what circumstances each is appropriate." Now I'll try to say what I think your position is: How does that sound?
Taboo "Outside View"

On the contrary; tabooing the term is more helpful, I think. I've tried to explain why in the post. I'm not against the things "outside view" has come to mean; I'm just against them being conflated with / associated with each other, which is what the term does. If my point was simply that the first Big List was overrated and the second Big List was underrated, I would have written a very different post!

My initial comment was focused on your point about conflation, because I think this point bears on the linguistic question more strongly than the other p... (read more)

kokotajlod: I said in the post, I'm a fan of reference classes. I feel like you think I'm not? I am! I'm also a fan of analogies. And I love trend extrapolation. I admit I'm not a fan of the anti-weirdness heuristic, but even it has its uses. In general most of what you are saying in this thread is stuff I agree with, which makes me wonder if we are talking past each other. (Example 1: Your second small comment about reference class tennis. Example 2: Your first small comment, if we interpret instances of "outside view" as meaning "reference classes" in the strict sense, though not if we use the broader definition you favor. Example 3: your points a, b, c, and e. (point d, again, depends on what you mean by 'outside view,' and also what counts as often.))

My problem is with the term "Outside view." (And "inside view" too!) I don't think you've done much to argue in favor of it in this thread. You have said that in your experience it doesn't seem harmful; fair enough, point taken. In mine it does. You've also given two rough definitions of the term, which seem quite different to me, and also quite fuzzy. (e.g. if by "reference class forecasting" you mean the stuff Tetlock's studies are about, then it really shouldn't include the anti-weirdness heuristic, but it seems like you are saying it does?) I found myself repeatedly thinking "but what does he mean by outside view? I agree or don't agree depending on what he means..." even though you had defined it earlier.

You've said that you think the practices you call "outside view" are underrated and deserve positive reinforcement; I totally agree that some of them are, but I maintain that some of them are overrated, and would like to discuss each of them on a case by case basis instead of lumping them all together under one name. Of course you are free to use whatever terms you like, but I intend to continue to ask people to be more precise when I hear "outside view" or "inside view." :)
Taboo "Outside View"

I agree that people sometimes put too much weight on particular outside views -- or do a poor job of integrating outside views with more inside-view-style reasoning. For example, in the quote/paraphrase you present at the top of your post, something has clearly gone wrong.[1]

But I think the best intervention, in this case, is probably just to push the ideas "outside views are often given too much weight" or "heavy reliance on outside views shouldn't be seen as praiseworthy" or "the correct way to integrate outside views with more inside-view reasoning is... (read more)

kokotajlod: On the contrary; tabooing the term is more helpful, I think. I've tried to explain why in the post. I'm not against the things "outside view" has come to mean; I'm just against them being conflated with / associated with each other, which is what the term does. If my point was simply that the first Big List was overrated and the second Big List was underrated, I would have written a very different post!

By what definition of "outside view?" There is some evidence that in some circumstances people don't take reference class forecasting seriously enough; that's what the original term "outside view" meant. What evidence is there that the things on the Big List O' Things People Describe as Outside View are systematically underrated by the average intellectual?
Taboo "Outside View"

When people use “outside view” or “inside view” without clarifying which of the things on the above lists they mean, I am left ignorant of what exactly they are doing and how well-justified it is. People say “On the outside view, X seems unlikely to me.” I then ask them what they mean, and sometimes it turns out they are using some reference class, complete with a dataset. (Example: Tom Davidson’s four reference classes for TAI). Other times it turns out they are just using the anti-weirdness heuristic. Good thing I asked for elaboration!

FWIW, as a... (read more)

kokotajlod: Thanks for this thoughtful pushback. I agree that YMMV; I'm reporting how these terms seem to be used in my experience but my experience is limited. I think opacity is only part of the problem; illicitly justifying sloppy reasoning is most of it. (My second and third points in "this expansion of meaning is bad" section.)

There is an aura of goodness surrounding the words "outside view" because of the various studies showing how it is superior to the inside view in various circumstances, and because of e.g. Tetlock's advice to start with the outside view and then adjust. (And a related idea that we should only use inside view stuff if we are experts... For more on the problems I'm complaining about, see the meme, or Eliezer's comment.) This is all well and good if we use those words to describe what was actually talked about by the studies, by Tetlock, etc. but if instead we have the much broader meaning of the term, we are motte-and-bailey-ing ourselves.
What are things everyone here should (maybe) read?

Fortunately, if I remember correctly, something like the distinction between the true criterion of rightness and the best practical decision procedure actually is a major theme in the Kagan book. (Although I think the distinction probably often is underemphasized.)

It is therefore kind of misleading to think of consequentialism vs. deontology vs. virtue ethics as alternative theories, which however is the way normative ethics is typically presented in the analytic tradition.

I agree there is something to this concern. But I still wouldn't go so far as to... (read more)

Max_Daniel: Yeah, I think these are good points. I also suspect that many deontologists and virtue ethicists would be extremely annoyed at my claim that they aren't alternative theories to consequentialism. (Though I also suspect that many are somewhat annoyed at the typical way the distinctions between these types of theories are described by philosophers in a broadly consequentialist tradition. My limited experience debating with committed Kantians suggests that disagreements seem much more fundamental than "I think the right action is the one with the best consequences, and you think there are additional determinants of rightness beyond axiology", or anything like that.)
What are things everyone here should (maybe) read?

A slightly boring answer: I think most people should at least partly read something that overviews common theories and frameworks in normative ethics (and the arguments for and against them) and something that overviews core concepts and principles in economics (e.g. the idea of expected utility, the idea of an externality, supply/demand, the basics of economic growth, the basics of public choice).

In my view, normative ethics and economics together make up a really large portion of the intellectual foundation that EA is built on.

One good book that overview... (read more)

I remember that reading up on normative ethics was one of the first things I focused on after I had encountered EA. I'm sure it was useful in many ways. For some reason, however, I feel surprisingly lukewarm about recommending that people read about normative ethics. 

It could be because my view these days is roughly: "Once you realize that consequentialism is great as a 'criterion of rightness' but doesn't work as 'decision procedure' for boundedly rational agents, a lot of the themes from deontology, virtue ethics, moral particularism, and moral plur... (read more)

Ben Garfinkel's Shortform

That's a good example.

I do agree that quasi-random variation in culture can be really important. And I agree that this variation is sometimes pretty sticky (e.g. Europe being predominantly Christian and the Middle East being predominantly Muslim for more than a thousand years). I wouldn't say that this kind of variation is a "rounding error."

Over sufficiently long timespans, though, I think that technological/economic change has been more significant.

As an attempt to operationalize this claim: The average human society in 1000AD was obviously very differen... (read more)

Ben Garfinkel's Shortform

FWIW, I wouldn't say I agree with the main thesis of that post.

However, while I expect machines that outcompete humans for jobs, I don’t see how that greatly increases the problem of value drift. Human cultural plasticity already ensures that humans are capable of expressing a very wide range of values. I see no obviously limits there. Genetic engineering will allow more changes to humans. Ems inherit human plasticity, and may add even more via direct brain modifications.

In principle, non-em-based artificial intelligence is capable of expressing the enti

... (read more)
abergal: Really appreciate the clarifications! I think I was interpreting "humanity loses control of the future" in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where "humanity" refers to present-day humans, rather than humans at any given time period. I totally agree that future humans may have less freedom to choose the outcome in a way that's not a consequence of alignment issues. I also agree value drift hasn't historically driven long-run social change, though I kind of do think it will going forward, as humanity has more power to shape its environment at will.
Ben Garfinkel's Shortform

Do you have the intuition that absent further technological development, human values would drift arbitrarily far?

Certainly not arbitrarily far. I also think that technological development (esp. the emergence of agriculture and modern industry) has played a much larger role in changing the world over time than random value drift has.

[E]ven non-extinction AI is enabling a new set of possibilities that modern-day humans would endorse much less than the decisions of future humans otherwise.

I definitely think that's true. But I also think that was true ... (read more)

Ben Garfinkel's Shortform

A thought on how we describe existential risks from misaligned AI:

Sometimes discussions focus on a fairly specific version of AI risk, which involves humanity being quickly wiped out. Increasingly, though, the emphasis seems to be on the more abstract idea of “humanity losing control of its future.” I think it might be worthwhile to unpack this latter idea a bit more.

There’s already a fairly strong sense in which humanity has never controlled its own future. For example, looking back ten thousand years, no one decided that the sedentary agriculture would i... (read more)

Aaron Gertler: Would you consider making this into a top-level post? The discussion here is really interesting and could use more attention, and a top-level post helps to deliver that (this also means the post can be tagged for greater searchability). I think the top-level post could be exactly the text here, plus a link to the Shortform version so people can see those comments. Though I'd also be interested to see the updated version of the original post which takes comments into account (if you felt like doing that).
Max_Daniel: I agree with most of what you say here. [ETA: I now realize that I think the following is basically just restating what Pablo already suggested in another comment (https://forum.effectivealtruism.org/posts/kLYD95SK8tQFRmw4T/ben-garfinkel-s-shortform?commentId=dG9Xr8D44Sb7zBHPh).]

I think the following is a plausible & stronger concern, which could be read as a stronger version of your crisp concern #3. "Humanity has not had meaningful control over its future, but AI will now take control one way or the other. Shaping the transition to a future controlled by AI is therefore our first and last opportunity to take control. If we mess up on AI, not only have we failed to seize this opportunity, there also won't be any other."

Of course, AI being our first and only opportunity to take control of the future is a strictly stronger claim than AI being one such opportunity. And so it must be less likely. But my impression is that the stronger claim is sufficiently more important that it could be justified to basically 'wager' most AI risk work on it being true.
rohinmshah: I agree with this general point. I'm not sure if you think this is an interesting point to notice that's useful for building a world-model, and/or a reason to be skeptical of technical alignment work. I'd agree with the former but disagree with the latter.
abergal: Do you have the intuition that absent further technological development, human values would drift arbitrarily far? It's not clear to me that they would-- in that sense, I do feel like we're "losing control" in that even non-extinction AI is enabling a new set of possibilities that modern-day humans would endorse much less than the decisions of future humans otherwise. (It does also feel like we're missing the opportunity to "take control" and enable a new set of possibilities that we would endorse much more.) Relatedly, it doesn't feel to me like the values of humans 150,000 years ago and humans now and even ems in Age of Em are all that different on some more absolute scale.

Another interpretation of the concern, though related to your (3), is that misaligned AI may cause humanity to lose the potential to control its future. This is consistent with humanity not having (and never having had) actual control of its future; it only requires that this potential exists, and that misaligned AI poses a threat to it.

Ben Garfinkel's Shortform

Good point!

That consideration -- and the more basic consideration that more junior people often just know less -- definitely pushes in the opposite direction. If you wanted to try some version of seniority-weighted epistemic deference, my guess is that the most reliable cohort would have studied a given topic for at least a few years but less than a couple decades.

Ben Garfinkel's Shortform

A thought on epistemic deference:

The longer you hold a view, and the more publicly you hold a view, the more calcified it typically becomes. Changing your mind becomes more aversive and potentially costly, you have more tools at your disposal to mount a lawyerly defense, and you find it harder to adopt frameworks/perspectives other than your favored one (the grooves become firmly imprinted into your brain). At least, this is the way it seems and personally feels to me.[1]

For this reason, the observation “someone I respect publicly argued for X many years a... (read more)

At least in software, there's a problem I see where young engineers are often overly bought-in to hype trains, but older engineers (on average) stick with technologies they know too much.

I would imagine something similar in academia, where hot new theories are over-valued by the young, but older academics have the problem you describe.

Ben Garfinkel's Shortform

I’d actually say this is a variety of qualitative research. At least in the main academic areas I follow, though, it seems a lot more common to read and write up small numbers of detailed case studies (often selected for being especially interesting) than to read and write up large numbers of shallow case studies (selected close to randomly).

This seems to be true in international relations, for example. In a class on interstate war, it’s plausible people would be assigned a long analysis of the outbreak of WW1, but very unlikely they’d be assigned short descriptions of the outbreaks of twenty random wars. (Quite possible there’s a lot of variation between fields, though.)

Ben Garfinkel's Shortform

In general, I think “read short descriptions of randomly sampled cases” might be an underrated way to learn about the world and notice issues with your assumptions/models.

A couple other examples:

I’ve been trying to develop a better understanding of various aspects of interstate conflict. The Correlates of War militarized interstate disputes (MIDs) dataset is, I think, somewhat useful for this. The project files include short descriptions of (supposedly) every case between 1993 and 2014 in which one state “threatened, displayed, or used force against anoth... (read more)

Stefan_Schubert: Interesting ideas. Some similarities with qualitative research (https://en.wikipedia.org/wiki/Qualitative_research), but also important differences, I think (if I understand you correctly).
Ben Garfinkel's Shortform

The O*NET database includes a list of about 20,000 different tasks that American workers currently need to perform as part of their jobs. I’ve found it pretty interesting to scroll through the list, sorted in random order, to get a sense of the different bits of work that add up to the US economy. I think anyone who thinks a lot about AI-driven automation might find it useful to spend five minutes scrolling around: it’s a way of jumping yourself down to a lower level of abstraction. I think the list is also a little bit mesmerizing, in its own right.

One up... (read more)
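For anyone who wants to try the exercise, here is a minimal sketch in Python, assuming the O*NET "Task Statements" file has been downloaded as a tab-separated text file; the filename and column names are assumptions and may differ between O*NET releases.

```python
# A sketch of the "scroll through randomly ordered task descriptions" exercise.
# Assumes the O*NET "Task Statements" file is available locally as a
# tab-separated text file; filename and column names are assumptions.
import pandas as pd

tasks = pd.read_csv("Task Statements.txt", sep="\t")

# Shuffle so tasks aren't read grouped by occupation.
shuffled = tasks.sample(frac=1.0, random_state=0)

for _, row in shuffled.head(20).iterrows():
    print(f"{row['Title']}: {row['Task']}")
```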


abergal: I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.
Is Democracy a Fad?

Thanks for the comment!

I think endnotes 12 and 13, within my cave of endnotes, may partly address this concern.

I don't think the prediction that the labor share will fall in the future depends on (a) the assumption that the amount of work to be done in the economy is constant, (b) the assumption that automation is currently reducing the demand for labor, or (c) the assumption that individual AI systems will tend to have highly general capabilities. I do agree that the first two assumptions are wrong. I also think the third assumption is very plausibly wr... (read more)

evelynciara: Thanks for your very thorough response! I'm going to try to articulate my reasons for being skeptical based on what I understand about AI and econ (although I'm not an expert in either). And I'll definitely read the papers you linked when I have more time.

I agree that it's theoretically possible to build AGI; as I like to put it, it's a no-brainer (pun very much intended). But I think that replicating the capabilities of the human brain will be very expensive. Even if algorithmic improvements drive down the amounts of compute needed for ML training and inference, I would expect narrow AI systems to be cheaper and easier to train than more general ones at any point in time. If you wanted to automate 3 different tasks, you would train 3 separate ML systems to do each of them, because you could develop them independently from each other. Whereas if you tried to train a single AI system to do all of them, I think it would be more complicated to ensure that it reaches the same performance as the collection of narrow AI systems, and it would require more compute.

Also, if you wanted a general intelligence (whether a human or machine) to do tasks that require <insert property of general intelligence>, I think it would be cheaper to hire humans, up to a point. This is partly because, until AGI is commercially viable, the process of developing and maintaining AI systems necessarily involves human labor. Machine intelligence scales because computation does, but I think it would be unlikely to scale enough to make machine labor more cost-effective than human labor in all cases.

I do think that AGI depressing human wages to the point of mass unemployment is a tail risk that society should watch for, and that it would lead to humans losing control of society through enfeeblement, but I don't think it's a necessary outcome of further AI development.
Is Democracy a Fad?

Thanks for sharing this, Nathan! Very interesting graph (and a metric I haven't ever thought to consider.)

I'm curious if you have any views on what we should take away from trends in "the portion of output produced by democracies" vs. "the portion of people living under democracy" vs. "the portion of states that are democratic."

Am I right to think that "portion of output produced by democracies" is most useful as a measure of the global power/influence of democracies? If so, that does seem like an interesting trend to track. I could also imagine it being i... (read more)

Is Democracy a Fad?

So that makes it sound like we might want to aim for good post-human/transhuman scenarios (if aiming for the good versions specifically is relatively tractable), or for good scenarios in which something non-human is very much in control (like developing a friendly agential AI).

I'm not sure if that follows. I mainly think that the meaning of the question "Will the future be democratic?" becomes much less clear when applied to fully/radically post-human futures. But I'm not sure if I see a natural reason to think that the futures would be 'politically bet... (read more)

MichaelA: It sounds like you mainly have in mind something akin to preference aggregation. It seems to me that a similarly important benefit might be that democracies are likely more conducive to a free exchange of ideas/perspectives and to people converging on more accurate ideas/perspectives over time. (I have in mind something like the marketplace of ideas (https://en.wikipedia.org/wiki/Marketplace_of_ideas) concept. I should note that I'm very unsure how strong those effects are, and how contingent they are on various features of the present world which we should expect to change in future.) Did you mean for your comment to imply that idea as well? In any case, do you broadly agree with that idea?
MichaelA: Interesting, thanks! I think those points broadly make sense to me. I think this is a good point, but I also think that:

  1. The use of the term "dystopia" without clarification is probably not ideal
  2. A future that's basically like the current-day Hanoi everywhere forever is very plausibly an existential catastrophe (given Bostrom/Ord's definitions and some plausible moral and empirical views)
    • (This is a very different claim from "Hanoi is supremely awful by present-day standards", or even "I'd hate to live in Hanoi myself")
  3. In my previous comment, I intended for things like "current-day Hanoi everywhere forever" to be potentially included as among the failure modes I'm concerned about

To expand on those claims a bit: When I use the term “dystopia”, I tend to essentially have in mind what Ord (2020) (https://theprecipice.com/) calls “unrecoverable dystopia”, which is one of his three types of existential catastrophe, along with extinction and unrecoverable collapse. And he defines an existential catastrophe in turn as “the destruction of humanity’s longterm potential.” So I think the simplest description of what I mean by the term "unrecoverable dystopia" would be "a scenario in which civilization will continue to exist, but it is now guaranteed that the vast majority of the value that previously was attainable will never be attained".[1] (See also Venn diagrams of existential, global, and suffering catastrophes (https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering) and Clarifying existential risks and existential catastrophes (https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes).)

So this wouldn't require that the average sentient being has a net-negative life, as long as it's possible that something far better could've happened but now is guaranteed to not happen. And it more clearly wo
Is Democracy a Fad?

Are you seeing this prediction as including scenarios in which TAI has been developed by then, but things are basically going well, at least one million beings roughly like humans still exist, and the TAI is either agential and well-aligned with humanity and deferring to our wishes[1] or CAIS-like / tool-like?

Yep! I'm including these scenarios in the prediction.

I suppose I'm conditioning on either:

(a) AI has already been truly transformative, but people are still around and still meaningfully responsible for some important political decisions.*

(b) AI ha... (read more)

MichaelA: Interesting, thanks. So now I'm thinking that maybe your prediction, if accurate, is quite concerning. It sounds like you believe the future could take roughly the following forms:

  1. There are no longer really people or they no longer really govern themselves. Subtypes:
    1. Extinction
    2. A post-human/transhuman scenario (could be good or bad)
    3. There are still essentially people, but something else is very much in control (probably AI; could be either aligned or misaligned with what we do/should value)
  2. There are still essentially people and they still govern themselves, and there's "something like a 4-in-5 chance that the portion of people living under a proper democracy will be substantially lower than it is today"
    • That sounds to me like a 4-in-5 chance of something that might probably itself be an existential catastrophe (global authoritarianism that lasts indefinitely long), or might substantially increase the chances of some other existential catastrophe (e.g., because it's harder to have a long reflection and so bad values get locked in)

So that makes it sound like we might want to aim for good post-human/transhuman scenarios (if aiming for the good versions specifically is relatively tractable), or for good scenarios in which something non-human is very much in control (like developing a friendly agential AI).

But maybe you don't see possibility 2 as necessarily that concerning? E.g., maybe you think that something like mild or genuinely enlightened and benevolent authoritarianism accounts for a substantial part of the likelihood of authoritarianism?

(Also, I'm aware that, as you emphasise, the "4-in-5" claim shouldn't be taken too seriously. I'm sort of using it as a springboard for thought - something like "If the rough worldview that tentatively generated that probability turned out to be totally correct, how concerned should I be and what futu
Is Democracy a Fad?

Hi Michael, I think this is a great comment! I would be really interested in a rough 'civilizational trends database' or anything that could help clarify what a sensible prior for social trend persistence would be.

I'm not exactly sure how this would work, but one trick might be to pick a few well-document times/regions in world history and try to log trends that historians think are worth remarking on. For example, for the late Roman Empire, the 'religious trends' subset of the database would include both the rise of Christianity (ultra-robust) and the ris... (read more)

HaydnBelfield: I think the closest things we've got that's similar to this are: Luke Muehlhauser's work on 'amateur macrohistory' (https://lukemuehlhauser.com/industrial-revolution/), and the (more academic) Peter Turchin's Seshat database (http://seshatdatabank.info/).
Is Democracy a Fad?

I would actually bet on average democracy continuing to increase over the next few decades.* Over this timespan, I'm still pretty inclined to extrapolate the rising trend forward, rather than updating very much on the past decade or so of possible backsliding. It also seems relevant that many relatively poorer and less democratic countries are continuing to develop, supposing that development actually is an important factor in democratization.

I also don't think there are any signs that automation is already playing a major role in democratic backsliding. (... (read more)

Is Democracy a Fad?

So it's not obvious to me that there will be any positive length window of time between full automation and the end of human supremacy.

I agree with this -- and agree I probably should have emphasized this caveat more!

The critical thing, in my mind, is whether humans (or something in that ballpark) are still largely governing themselves. This is consistent with broadly superhuman AI capabilities existing. For example, on a CAIS-like development trajectory, these superhuman AI capabilities might not even (for the most part) be embedded in very agential sy... (read more)

MichaelA: I had a question that I think is semi-related to this thread, regarding your prediction: Are you seeing this prediction as including scenarios in which TAI has been developed by then, but things are basically going well, at least one million beings roughly like humans still exist, and the TAI is either agential and well-aligned with humanity and deferring to our wishes[1] or CAIS-like / tool-like?

I think I'd see those scenarios as fitting your described conditions. And I think I'd also see them as among the most likely picture of a good, non-existentially-catastrophic future.[2] So I wonder whether (a) you don't intend to be accounting for such scenarios, (b) you think they're much less likely relative to other good futures than I do, or (c) you think good futures are much less likely relative to bad ones than I do?

A related uncertainty I have is what you mean by "individual people still at least sort of exist" in that quote. E.g., would you include whole brain emulations with a fairly similar mind design to current humans?

[1] This could maybe be like a more extreme version of how the US President is "agential" and makes many of the actual decisions, but US citizens still in a substantial sense "govern themselves" because the president is partly acting based on their preferences. (Though obviously that's different in that there are checks and balances, elections, etc.)

[2] I think the main alternatives would be:
  • somehow TAI is never developed, yet we can still fulfil our potential
  • humans changing into or being replaced by something very different
  • TAI is aligned with our idealised preferences at one point, and then just rolls with that, doing good things but not in any meaningful sense still being actively "governed by human-like beings"

Caveat that I wrote this comment relatively quickly and think a lot of it is poorly operationalised and would benefit from better terminology.
Is Democracy a Fad?

Thanks for the reading list!

I looked into the backsliding literature just a bit and had the initial impression it wasn't as relevant for long-run and system-wide forecasting. A lot of the work seemed useful for forecasting whether a particular country might backslide (e.g. how large a risk-factor is Trump in the US or Modi in India?), or for making medium-term extrapolations (e.g. has backsliding become more common over the past decade?). But I didn't see as clear of a way to use it to make long-run system-level predictions.

The point that democratic instit... (read more)

I would say more optimistic. I think there's a pretty big difference between emergence (a shift from authoritarianism to democracy) and democratic backsliding, that is autocratisation (a shift from democracy to authoritarianism). Once that shift has consolidated, there are lots of changes that make it self-reinforcing/path-dependent: norms and identities shift, economic and political power shifts, political institutions shift, the role of the military shifts. Some factors are the same for emergence and persistence, like wealth/growth, but some aren't (whi... (read more)

Books / book reviews on nuclear risk, WMDs, great power war?

I’ve read one book focused on trends and drivers of violence more generally, with some parts on/relevance to great power war: This is of course Better Angels of Our Nature.

I would recommend Only the Dead and The Causes of War and the Spread of Peace over Better Angels.

Only the Dead is basically a pretty effective take-down of Pinker's analysis of trends in interstate war. Some key points are: (i) Pinker focuses on wars between European states, or wars between (typically European) "great powers," rather than interstate war generally. (ii) Pinker doesn't... (read more)

MichaelA: Hey Ben, thanks for those recommendations! I hadn't heard of them, and both sound interesting and potentially useful. I've now downloaded Only the Dead, and made a note to maybe read The Causes of War and the Spread of Peace after that.
Linch's Shortform

If he hasn't seriously considered working on supervolcanoes before, then it definitely seems worth raising the idea with him.

I know almost nothing about supervolcanoes, but, assuming Toby's estimate is reasonable, I wouldn't be too surprised if going from zero to one longtermist researcher in this area is more valuable than adding an additional AI policy researcher.

Does Economic History Point Toward a Singularity?

The world GDP growth rate also seems to have been increasing during the immediate lead-up to the Industrial Revolution, as well as during the following century, although the exact numbers are extremely uncertain. The growth rate most likely stabilized around the middle of the 20th century.

Does Economic History Point Toward a Singularity?

The growth rate of output per person definitely has been roughly constant in developed countries (esp. the US) in the 20th century. In the doc, I'm instead talking about the growth rate of total output, globally, from about 1600 to 1950.

(So the summary may give the wrong impression. I ought to have suggested a tweak to make it clearer.)

Michael_Wiebe: Right, growth(GDP) > growth(GDP per capita) when growth(population) > 0.
Does Economic History Point Toward a Singularity?

Addendum:

In the linked doc, I mainly contrast two different perspectives on the Industrial Revolution.

  • Stable Dynamics: The core dynamics of economic growth were stable between the Neolithic Revolution and the 20th century. Growth rates increased substantially around the Industrial Revolution, but this increase was nothing new. In fact, growth rates were generally increasing throughout this lengthy period (albeit in a stochastic fashion). The most likely cause for the upward trend in growth rates was rising population levels: larger populations could com

... (read more)
Michael_Wiebe: One version of the phase change model that I think is worth highlighting: S-curve growth. Basically, the set of transformative innovations is finite, and we discovered most of them over the past 200 years. Hence, the Industrial Revolution was a period of fast technological growth, but that growth will end as we run out of innovations. The hockey-stick graph will level out and become an S-curve, as g → 0.
Does Economic History Point Toward a Singularity?

Actually, I believe the standard understanding of "technology" in economics includes institutions, culture, etc.--whatever affects how much output a society wrings from a given input. So all of those are by default in Kremer's symbol for technology, A. And a lot of those things plausibly could improve faster, in the narrow sense of increasing productivity, if there are more people, if more people also means more societies (accidentally) experimenting with different arrangements and then setting examples for others; or if such institutional innovations are

... (read more)
Does Economic History Point Toward a Singularity?

Hi David,

Thank you for this thoughtful response — and for all of your comments on the document! I agree with much of what you say here.

(No need to respond to the below thoughts, since they somehow ended up quite a bit longer than I intended.)

Kahneman and Tversky showed that incorporating perspectives that neglect inside information (in this case the historical specifics of growth accelerations) can reduce our ignorance about the future--at least, the immediate future. This practice can improve foresight both formally--leading experts

... (read more)

I agree with much of this. A few responses.

As I see it, there are a couple of different reasons to fit hyperbolic growth models — or, rather, models of form (dY/dt)/Y = aY^b + c — to historical growth data.
...
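A minimal derivation for the special case c = 0 and b > 0 of the model above, which makes the "singularity" language concrete: any strictly positive b implies divergence at a finite date, which is why fitted values of b are the crux.

```latex
% Special case c = 0, b > 0 of (dY/dt)/Y = aY^b + c.
\frac{1}{Y}\frac{dY}{dt} = aY^{b}
\quad\Longrightarrow\quad
\frac{d}{dt}\,Y^{-b} = -ab
\quad\Longrightarrow\quad
Y(t) = \left(Y_0^{-b} - ab\,t\right)^{-1/b}.
% Output diverges in finite time, at t* = 1 / (a b Y_0^b).
% With b = 0, the same equation instead gives ordinary exponential growth at rate a + c.
```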

I think the distinction between testing a theory and testing a mathematical model makes sense, but the two are intertwined. A theory will tend naturally to imply a mathematical model, but perhaps less so the other way around. So I would say Kremer is testing both a theory and a model—not confined to just one side of that di... (read more)

Asking for advice

I would also like to come out of the woodwork as someone who finds Calendly vaguely annoying, for reasons that are entirely opaque to me.

(Although it's also unambiguously more convenient for me when people send me Calendly links -- and, given the choice, I think I'd mostly like people to keep doing this.)

Stefan_Schubert: Maybe one option would be to both send the Calendly and write a more standard email? E.g.: "When would suit you? How about Tuesday 3pm or Wednesday 4pm? Alternatively, you could check my Calendly, if you prefer." Maybe some find that overly roundabout.
Does Economic History Point Toward a Singularity?

If one believed the numbers on wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.

I believe the population surge is closely related to the European population surge: it's largely attributed to the Columbian exchange + expanded markets/trade. One of the biggest things is that there's an expansion in the land under cultivation, since potatoes and maize can be grown on marginal land that wouldn't otherwise work well for rice... (read more)

Paul_Christiano: Thanks, super helpful. (I don't really buy an overall take like "It seems unlikely" but it doesn't feel that mysterious to me where the difference in take comes from. From the super zoomed out perspective 1200 AD is just yesterday from 1700AD, it seems like random fluctuations over 500 years are super normal and so my money would still be on "in 500 years there's a good chance that China would have again been innovating and growing rapidly, and if not then in another 500 years it's reasonably likely..." It makes sense to describe that situation as "nowhere close to IR" though. And it does sound like the super fast growth is a blip.)
Does Economic History Point Toward a Singularity?

My sense of that comes from: (i) in growth numbers people usually cite, Europe's growth was absurdly fast from 1000AD - 1700AD (though you may think those numbers are wrong enough to bring growth back to a normal level) (ii) it seems like Europe was technologically quite far ahead of other IR competitors.

I'm curious about your take. Is it that:

  • The world wasn't yet historically exceptional by 1700, there have been other comparable periods of rapid progress. (What are the historical analogies and how analogous do you think they are? Is my impression of t

... (read more)

If one believed the numbers on wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.

If you are trying to model things at the level that Roodman or I are, the difference between 1400 and 1600 just isn't a big deal, the noise terms are on the order of 500 years at that point.

So maybe the interesting question is if and why scholars think that China wouldn't have had an IR shortly after Europe (i.e. within a few cen... (read more)

Does Economic History Point Toward a Singularity?

Thanks for the feedback! I probably ought to have said more in the summary.

Essentially:

  • For the 'old data': I run a non-linear regression on the population growth rate as a function of population, for a dataset starting in 10000BC. The function is (dP/dt)/P = a*P^b, where P represents population. If b = 0, this corresponds to exponential growth. If b = 1, this corresponds to the strict version of the Hyperbolic Growth Hypothesis. If 0 < b < 1, this still corresponds to hyperbolic growth, although the growth rate is less than proportional to the pop

... (read more)
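As a rough sketch of what such a fit can look like in practice: the snippet below fits (dP/dt)/P = a*P^b with scipy, using round illustrative population figures rather than the dataset actually analysed in the doc, and a cruder estimation shortcut than a careful treatment would use.

```python
# A rough sketch of the regression described above: fit (dP/dt)/P = a * P**b
# to a population series. Population figures are round illustrative numbers
# (in millions), not the dataset used in the document; growth between adjacent
# observations is attributed to the geometric-mean population as a shortcut.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([-10000, -5000, -1000, 1, 1000, 1500, 1800, 1900, 1950], dtype=float)
pop = np.array([4, 5, 50, 170, 265, 425, 900, 1625, 2500], dtype=float)

growth = np.diff(np.log(pop)) / np.diff(years)   # average growth rate in each interval
mid_pop = np.sqrt(pop[:-1] * pop[1:])            # population level assigned to each interval

def model(P, a, b):
    return a * P**b

(a_hat, b_hat), _ = curve_fit(model, mid_pop, growth, p0=[1e-4, 0.5])
print(f"a = {a_hat:.2e}, b = {b_hat:.2f}")  # b ~ 0: exponential; b ~ 1: strictly hyperbolic
```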
Does Economic History Point Toward a Singularity?

Economic histories do tend to draw causal arrows between several of these differences, sometimes suggesting a sort of chain reaction, although these narrative causal diagrams are admittedly never all that satisfying; there’s still something mysterious here.

Just to make this more concrete:

One example of an IR narrative that links a few of these changes together is Robert Allen's. To the extent that I understand/remember it, the narrative is roughly: The early modern expansion of trade networks caused an economic boom in England, especially in textile man... (read more)

Does Economic History Point Toward a Singularity?

I also pretty strongly have this intuition: the Kremer model, and the explanation it gives for the Industrial Revolution, is in tension with the impressions I've formed from reading the great divergence literature.

Although, to echo Max's comment, you can 'believe' the Kremer model without also thinking that an 18th/19th century Industrial Revolution was inevitable. It depends on how much noise you allow.

One of the main contributions in David Roodman's recent report is to improve our understanding of how noise/stochasticity can result in pretty different-lo... (read more)
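To make the role of stochasticity concrete, here is a toy simulation, not Roodman's actual model and with made-up parameters, in which identical hyperbolic growth rules plus period-by-period noise produce "takeoff" dates spread over thousands of years.

```python
# A toy Monte Carlo illustration (not Roodman's actual model) of how noise in a
# hyperbolic-ish growth process spreads out "takeoff" dates across otherwise
# identical worlds. All parameter values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma = 2e-4, 0.5, 1.0     # hypothetical growth parameters and noise scale
dt, horizon = 1.0, 20_000        # one-year steps over a long horizon
takeoff_level = 1e4              # output level we (arbitrarily) call "takeoff"

takeoff_times = []
for _ in range(200):
    y, t = 1.0, 0.0
    while t < horizon and y < takeoff_level:
        # Deterministic hyperbolic growth rate a * y**b, scaled by a lognormal shock.
        y *= 1.0 + a * y**b * dt * rng.lognormal(mean=0.0, sigma=sigma)
        t += dt
    takeoff_times.append(t if y >= takeoff_level else np.inf)

reached = np.array([t for t in takeoff_times if np.isfinite(t)])
print(f"{len(reached)}/200 worlds take off within the horizon; "
      f"takeoff year p10={np.percentile(reached, 10):.0f}, "
      f"p50={np.percentile(reached, 50):.0f}, p90={np.percentile(reached, 90):.0f}")
```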

Paul_Christiano: I think Roodman's model implies a standard deviation of around 500-1000 years for IR timing starting from 1000AD, but I haven't checked. In general for models of this type it seems like the expected time to singularity is a small multiple of the current doubling time, with noise also being on the order of the doubling time. The model clearly underestimates correlations and hence the variance here---regardless of whether we go in for "2 revolutions" or "randomly spread out" we can all agree that a stagnant doubling is more likely to be followed by another stagnant doubling and vice versa, but the model treats them as independent.

This seems to suggest there are lots of civilizations like Europe-in-1700. But it seems to me that by this time (and so I believe before the Americas had any real effect) Europe's state of technological development was already pretty unprecedented. This is a lot of what makes many of the claims about "here's why the IR happened" seem dubious to me. My sense of that comes from: (i) in growth numbers people usually cite, Europe's growth was absurdly fast from 1000AD - 1700AD (though you may think those numbers are wrong enough to bring growth back to a normal level) (ii) it seems like Europe was technologically quite far ahead of other IR competitors.

I'm curious about your take. Is it that:
  • The world wasn't yet historically exceptional by 1700, there have been other comparable periods of rapid progress. (What are the historical analogies and how analogous do you think they are? Is my impression of technological sophistication wrong?)
  • 1700s Europe is quantitatively exceptional by virtue of being the furthest along example, but nevertheless there is a mystery to be explained about why it became even more exceptional rather than regressing to the mean (as historical exceptional-for-their-times civilizations had in the past). I don't currently see a mystery about this (given the level of noise in Roodman's model,
Does Economic History Point Toward a Singularity?

So to me it feels like adding random stuff like "yeah there are revolutions but we don't have any prediction about what they will look like" makes the richer model less compelling. It moves me more towards the ignorant perspective of "sometimes acceleration happens, maybe it will happen soon?", which is what you get in the limit of adding infinitely many ex ante unknown bells and whistles to your model.

I agree the richer stories, if true, imply a more ignorant perspective. I just think it's plausible that the more ignorant perspective is the correct ... (read more)

6Paul_Christiano1yIt feels like you are drawing some distinction between "contingent and complicated" and "noise." Here are some possible distinctions that seem relevant to me but don't actually seem like disagreements between us:

* If something is contingent and complicated, you can expect to learn about it with more reasoning/evidence, whereas if it's noise maybe you should just throw up your hands. Evidently I'm in the "learn about it by reasoning" category, since I spend a bunch of time thinking about AI forecasting.
* If something is contingent and complicated, you shouldn't count on e.g. the long-run statistics matching the noise distribution: there are unmodeled correlations (both real and subjective). I agree with this and think that e.g. the singularity date distributions (and singularity probability) you get out of Roodman's model are not trustworthy in light of that (as does Roodman).

So it's not super clear there's a non-aesthetic difference here. If I was saying "Growth models imply a very high probability of takeoff soon", then I can see why your doc would affect my forecasts. But where I'm at from historical extrapolations is more like "maybe, maybe not"; it doesn't feel like any of this should change that bottom line (and it's not clear how it would change that bottom line) even if I changed my mind everywhere that we disagree.

"Maybe, maybe not" is still a super important update from the strong "the future will be like the recent past" prior that many people implicitly have and I might otherwise take very seriously. It also leads me to mostly dismiss arguments like "this is obviously not the most important century since most aren't." But it mostly means that I'm actually looking at what is happening technologically. You may be responding to writing like this short post [https://sideways-view.com/2017/10/04/hyperbolic-growth/] where I say "We have been in a period of slowing growth for the last forty years. That’s a long time, but looking
7Ben Garfinkel1yJust to make this more concrete: One example of an IR narrative that links a few of these changes together is Robert Allen's [https://voxeu.org/article/why-was-industrial-revolution-british]. To the extent that I understand/remember it, the narrative is roughly:

The early modern expansion of trade networks caused an economic boom in England, especially in textile manufacturing. As a result, wages in England became unusually high. These high wages created unusually strong incentives to produce labor-saving technology. (One important effect of the Malthusian conditions is that they make labor dirt cheap.) England, compared to a few other countries that had similarly high wages at other points in history, also had access to really unusually cheap energy; they had huge and accessible coal reserves, which they were already burning as a replacement for wood. The unusually high levels of employment in manufacturing and trade also supported higher levels of literacy and numeracy. These conditions came together to support the development of technologies for harnessing fossil fuels, in the 19th century, and the rise of intensive R&D; these may never have been economically rational before. At this point, there was now a virtuous cycle that allowed England's growth -- which was initially an unsustainable form of growth based on trade, rather than technological innovation -- to become both sustained and innovation-driven. The spark then spread to other countries.

This particular tipping point story is mostly a story about why growth rates increased from the 19th century onward, although the growth surge in the previous few centuries, largely caused by the Columbian Exchange and expansion of trade networks, still plays an important causal role; the rapid expansion of trade networks drives British wages up and makes it possible for them to profitably employ a large portion of their population in manufacturing.
Does Economic History Point Toward a Singularity?

Also want to second this! (This is a far more extensive response and summary than I've seen on almost any EA forum post.)

Does Economic History Point Toward a Singularity?

Hi Paul,

Thanks for your super detailed comment (and your comments on the previous version)!

You are basically comparing "Series of 3 exponentials" to a hyperbolic growth model. I think our default simple hyperbolic growth model should be the one in David Roodman's report (blog post), so I'm going to think about this argument as comparing Roodman's model to a series of 3 noisy exponentials.

I think that Hanson's "series of 3 exponentials" is the neatest alternative, although I also think it's possible that pre-modern growth looked pretty different from clean exponentials (even on average / beneath the noise). There's also a semi-common narrative in which the two previous periods exhibited (on average) declining growth rates, until there was some 'breakthrough' that allowed the growth rate to surge: I suppose this would be a "three s-curve" model. Then there's the possibility that the growth pa
... (read more)
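
To spell out the competing shapes under discussion, here is a small illustrative sketch contrasting a "series of exponentials" path with a hyperbolic path. The regime dates, growth rates, and hyperbolic parameters are all invented for illustration and are not drawn from Hanson's, Kremer's, or Roodman's fitted models.

```python
import numpy as np

# Stylized comparison of the two model families, with made-up parameters.

t = np.arange(300)

# "Series of 3 exponentials": the growth rate jumps at two transition dates
# (stand-ins for the agricultural and industrial revolutions).
rates = np.where(t < 150, 0.005, np.where(t < 250, 0.02, 0.08))
series_of_exponentials = np.cumprod(1 + rates)

# Hyperbolic growth: the growth rate rises continuously with the level itself,
# (dy/dt)/y = a * y^b, simulated in discrete time.
a, b, y = 0.004, 0.5, 1.0
hyperbolic = []
for _ in t:
    hyperbolic.append(y)
    y *= 1 + a * y**b
hyperbolic = np.array(hyperbolic)

print(f"final growth rate, series of exponentials: {rates[-1]:.1%}")
print(f"final growth rate, hyperbolic:             {a * hyperbolic[-1]**b:.1%}")
```

The "three s-curve" variant mentioned above would modify the first path so that the growth rate within each regime declines until the next breakthrough, rather than staying constant.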
Does Economic History Point Toward a Singularity?

Thanks for the clarifying comment!

I'd hoped that effective population size growth rates might be at-least-not-completely-terrible proxies for absolute population size growth rates. If I remember correctly, some of these papers do present their results as suggesting changes in absolute population size, but I think you're most likely right: the relevant datasets probably can't give us meaningful insight into absolute population growth trends.

Does Economic History Point Toward a Singularity?

I should have been clearer in the summary: the hypothesis refers to the growth rate of total economic output (GDP) rather than output-per-person (GDP per capita). Output-per-person is typically thought to have been pretty stagnant until roughly the Industrial Revolution, although just how stagnant it was is controversial. Total output definitely did grow substantially, though.

What I'm calling the Hyperbolic Growth Hypothesis is at least pretty mainstream. Michael Kremer's paper is pretty classic (it's been cited about 2000 times) and some growth theory textbooks repeat its main claim. Although I don't have a great sense of exactly how widely accepted it is.
