Ben Garfinkel - Researcher at Future of Humanity Institute

Comments

Ben Garfinkel's Shortform

I’d actually say this is a variety of qualitative research. At least in the main academic areas I follow, though, it seems a lot more common to read and write up small numbers of detailed case studies (often selected for being especially interesting) than to read and write up large numbers of shallow case studies (selected close to randomly).

This seems to be true in international relations, for example. In a class on interstate war, it’s plausible people would be assigned a long analysis of the outbreak of WW1, but very unlikely they’d be assigned short descriptions of the outbreaks of twenty random wars. (Quite possible there’s a lot of variation between fields, though.)

Ben Garfinkel's Shortform

In general, I think “read short descriptions of randomly sampled cases” might be an underrated way to learn about the world and notice issues with your assumptions/models.

A couple other examples:

I’ve been trying to develop a better understanding of various aspects of interstate conflict. The Correlates of War militarized interstate disputes (MIDs) dataset is, I think, somewhat useful for this. The project files include short descriptions of (supposedly) every case between 1993 and 2014 in which one state “threatened, displayed, or used force against another.” Here, for example, is the set of descriptions for 2011-2014. I’m not sure I’ve had any huge/concrete take-aways, but I think reading the cases: (a) made me aware of some international tensions I was oblivious to; (b) gave me a slightly better understanding of dynamics around ‘micro-aggressions’ (e.g. flying over someone’s airspace); and (c) helped me more strongly internalize the low base rate for crises boiling over into war (since I’d previously disproportionately read about the historical disputes that turned into something larger).

Last year, I also spent a bit of time trying to improve my understanding of police killings in the US. I found this book unusually useful. It includes short descriptions of every single incident in which an unarmed person was killed by a police officer in 2015. I feel like reading a portion of it helped me to quickly notice and internalize different aspects of the problem (e.g. the fact that something like a third of the deaths are caused by tasers; the large role of untreated mental illness as a risk factor; the fact that nearly all fatal interactions are triggered by 911 calls, rather than stops; the fact that officers are trained to interact in importantly different ways with people they believe are on PCP; etc.). I assume I could have learned all the same things by just reading papers — but I think the case sampling approach was probably faster and better for retention.

I think there might be value in creating “random case description” collections for a broader range of phenomena. Academia really doesn’t emphasize these kinds of collections as tools for either research or teaching.

EDIT: Another good example of this approach to learning is Rob Bensinger's recent post "thirty-three randomly selected bioethics papers."

Ben Garfinkel's Shortform

The O*NET database includes a list of about 20,000 different tasks that American workers currently need to perform as part of their jobs. I’ve found it pretty interesting to scroll through the list, sorted in random order, to get a sense of the different bits of work that add up to the US economy. I think anyone who thinks a lot about AI-driven automation might find it useful to spend five minutes scrolling around: it’s a way of jumping yourself down to a lower level of abstraction. I think the list is also a little bit mesmerizing, in its own right.

One update I’ve made is that I’m now more confident that more than half of present-day occupational tasks could be automated using fairly narrow, non-agential, and boring-looking AI systems. (Most of them don’t scream “this task requires AI systems with long-run objectives and high levels of generality.”) I think it’s also pretty interesting, as kind of a game, to try to imagine as concretely as possible what the training processes might look like for systems that can perform (or eliminate the need for) different tasks on the list.
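For anyone who wants to browse a random sample themselves, here's a minimal sketch of one way to draw it (the filename and the column names are guesses based on the downloadable O*NET "Task Statements" file, and may need adjusting for a particular release):

```python
# Minimal sketch: draw ten O*NET tasks uniformly at random.
# Assumes the "Task Statements" file has been downloaded from onetcenter.org
# as a tab-delimited text file; the filename and the "Title"/"Task" column
# names are assumptions and may differ between database releases.
import pandas as pd

tasks = pd.read_csv("Task Statements.txt", sep="\t")

sample = tasks.sample(n=10, random_state=0)  # change random_state for a fresh sample
for _, row in sample.iterrows():
    print(f"{row['Title']}: {row['Task']}")
```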

As a sample, here are ten random tasks. (Some of these could easily be broken up into a lot of different sub-tasks or task variants, which might be automated independently.)

  • Cancel letter or parcel post stamps by hand.
  • Inquire into the cause, manner, and circumstances of human deaths and establish the identities of deceased persons.
  • Teach patients to use home health care equipment.
  • Write reports or articles for Web sites or newsletters related to environmental engineering issues.
  • Supervise and participate in kitchen and dining area cleaning activities.
  • Intervene as an advocate for clients or patients to resolve emergency problems in crisis situations.
  • Mark or tag material with proper job number, piece marks, and other identifying marks as required.
  • Calculate amount of debt and funds available to plan methods of payoff and to estimate time for debt liquidation.
  • Weld metal parts together, using portable gas welding equipment.
  • Provide assistance to patrons by performing duties such as opening doors and carrying bags.
Is Democracy a Fad?

Thanks for the comment!

I think endnotes 12 and 13, within my cave of endnotes, may partly address this concern.

I don't think the prediction that the labor share will fall in the future depends on (a) the assumption that the amount of work to be done in the economy is constant, (b) the assumption that automation is currently reducing the demand for labor, or (c) the assumption that individual AI systems will tend to have highly general capabilities. I do agree that the first two assumptions are wrong. I also think the third assumption is very plausibly wrong, in line with some of the analysis in Reframing Superintelligence.

I think the prediction only depends on the assumption that, in the future, it will become unnecessary (and comparatively more expensive) to hire human workers to produce goods and services. I find this assumption really plausible. The human brain is ultimately just a physical thing, so there's no fundamental physical reason why (at least in aggregate) human-made machines couldn't perform all of the same tasks that the brain is capable of.[1] I also think it's likely that engineers will eventually be able to make these kinds of machines; seemingly, the vast majority of AI researchers expect this to happen eventually. There are also, I think, very strong economic incentives to make and use these machines. If a business or state can produce goods and services more cheaply or effectively, by escaping the need to hire human workers, then it will typically want to do this. Any group that continues to pay for a lot of unnecessary human workers will be at a disadvantage.

This prediction is consistent with the observation that, historically, automation has tended to increase overall demand for labor. When one domain becomes highly automated, this tends to increase the demand for labor in complementary domains (inc. domains that did not previously exist) which are not highly automated. My understanding is that this dynamic explains why automation has mainly been driving wages up for the past couple hundred years. But the dynamic seems to break down once there are no longer any complementary automation-resistant domains.

For example: Suppose we live in a cheese-and-bread economy. People like eating cheese sandwiches, but don't like eating cheese on its own. It then seems like completely automating cheese production (using machines that are more efficient than humans) will tend to increase demand for workers to staff bread factories. Automating both cheese and bread production, though, seems like it would pretty much eliminate the demand for labor. If either factory has an extra ten thousand dollars to spare, then (seemingly) they have no incentive to use it to pay a human worker a living wage, rather than spending it on capital that will increase output by a larger amount.[2]
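To make the budget comparison in that last sentence concrete, here's a toy numerical sketch (all of the specific numbers are invented purely for illustration):

```python
# Toy sketch of the cheese-and-bread argument. A factory with a spare
# $10,000 compares hiring one worker against buying more machine capacity,
# and only hires if the worker adds more output per dollar. All numbers
# are made up for illustration.

def best_use_of_budget(budget, output_per_dollar_of_capital, output_per_worker, wage):
    """Return which use of the budget adds more output."""
    capital_output = budget * output_per_dollar_of_capital
    labor_output = (budget // wage) * output_per_worker
    return ("capital", capital_output) if capital_output >= labor_output else ("labor", labor_output)

# Only cheese is automated: machines are still poor substitutes in bread-making,
# so the extra $10,000 adds more output when spent on a worker.
print(best_use_of_budget(10_000, output_per_dollar_of_capital=0.05, output_per_worker=2_000, wage=10_000))
# -> ('labor', 2000)

# Both goods are automated: each dollar of capital now yields more output than
# a worker's wage does, so the budget goes to machines and labor demand vanishes.
print(best_use_of_budget(10_000, output_per_dollar_of_capital=0.5, output_per_worker=2_000, wage=10_000))
# -> ('capital', 5000.0)
```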

My thought process here is largely based on my memory of this paper and this paper. I'm not an economist, though, so I'm curious whether you or anyone else reading this comment thinks there's a significant gap/mistake in this analysis.


  1. As a caveat, in some cases, people might intrinsically prefer for certain goods or services to be provided by humans. For example, people might naturally prefer to watch human athletes, talk to human therapists, listen to sermons by human religious leaders, etc. Human labor could also become a kind of status good in its own right; paying people to do things could be sort of the future equivalent of buying rare paintings or NFTs. As a more direct and ominous analogy, my impression is that slaves used to be a really common status/luxury good for elites in lots of different parts of the world; maybe free human workers could play a similar social role in the future.

    This would prevent the labor share from going to zero, even if AI systems can (at the physical level) do everything that human workers can do. But I'd find it kind of surprising if this kind of work was enough to maintain very high labor force participation. It also seems like, if all remaining work was in this category, then we should still be worried about democracy. If military operations, law enforcement, the production of nearly all physical stuff, etc., were all highly and effectively automated, then that would still seem to undercut a lot of the hypothesized economic basis for democracy. ↩︎

  2. I don't think comparative advantage arguments ultimately help here. At the same time, though, I also don't feel like I have a great grasp of how to apply them to capital-labor substitution. ↩︎

Is Democracy a Fad?

Thanks for sharing this, Nathan! Very interesting graph (and a metric I haven't ever thought to consider).

I'm curious if you have any views on what we should take away from trends in "the portion of output produced by democracies" vs. "the portion of people living under democracy" vs. "the portion of states that are democratic."

Am I right to think that "portion of output produced by democracies" is most useful as a measure of the global power/influence of democracies? If so, that does seem like an interesting trend to track. I could also imagine it being interesting to look at secondary metrics of national power, if you haven't already. For example, I think some IR scholars argue for the use of GDP multiplied by GDP-per-capita, based on the intuition that poor-but-highly-populated countries (e.g. Indonesia) seem to have less global power than their GDPs would suggest. You're also probably already familiar with this sort of unprincipled metric of "national material capabilities" that international relations people sometimes use. My guess, though, is that the trends would probably look pretty similar.

It seems like "portion of output produced by democracies" also functions as a combined metric of the prevalence of democracy, the strength of the development/democracy correlation, and the weakness of the (I think slightly negative?) population/democracy correlation. I suppose it's a bad sign for democracy if any of these components decrease.

[[Edit: One more thought. If you haven't already done this, it might also be interesting to look at trends in Polity-score-weighted GDP as a more continuous measure of the financial power of democracy. I think the trend would probably look about the same, since China's polity score has been pretty stable over time, but there's some chance it'd be interestingly different. I might also just do this myself, out of curiosity.]]
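In case it's useful, here's a minimal sketch of how these metrics could be computed from a country-year panel (the input file and all column names are hypothetical placeholders, not a real dataset):

```python
# Minimal sketch of the metrics discussed above, assuming a hypothetical
# country-year table with columns "country", "year", "gdp", "population",
# and "polity2" (Polity scores run from -10 to +10; >= 6 is a common
# cutoff for counting a country as a democracy).
import pandas as pd

df = pd.read_csv("country_year_panel.csv")  # hypothetical input file

# Rough national-power proxy mentioned above: GDP multiplied by GDP per capita.
df["power_proxy"] = df["gdp"] * (df["gdp"] / df["population"])

def democratic_gdp_share(year_group):
    # Share of world output produced by countries at or above the Polity cutoff.
    return year_group.loc[year_group["polity2"] >= 6, "gdp"].sum() / year_group["gdp"].sum()

def polity_weighted_gdp_share(year_group):
    # Continuous version: weight each country's GDP by its Polity score,
    # rescaled from [-10, 10] to [0, 1].
    weights = (year_group["polity2"] + 10) / 20
    return (weights * year_group["gdp"]).sum() / year_group["gdp"].sum()

by_year = df.groupby("year")
print(by_year.apply(democratic_gdp_share))
print(by_year.apply(polity_weighted_gdp_share))
```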

Is Democracy a Fad?

So that makes it sound like we might want to aim for good post-human/transhuman scenarios (if aiming for the good versions specifically is relatively tractable), or for good scenarios in which something non-human is very much in control (like developing a friendly agential AI).

I'm not sure if that follows. I mainly think that the meaning of the question "Will the future be democratic?" becomes much less clear when applied to fully/radically post-human futures. But I'm not sure if I see a natural reason to think that the futures would be 'politically better' than futures that are more recognizably human. So, at least at the moment, I'm not inclined to treat this as a major reason to push for a more or less post-human future.

That sounds to me like a 4-in-5 chance of something that might probably itself be an existential catastrophe (global authoritarianism that lasts indefinitely long), or might substantially increase the chances of some other existential catastrophe (e.g., because it's harder to have a long reflection and so bad values get locked in).... But maybe you don't see [this possibility] as necessarily that concerning? E.g., maybe you think that something like mild or genuinely enlightened and benevolent authoritarianism accounts for a substantial part of the likelihood of authoritarianism?

On the implications of my prediction for future people:

I definitely think of my prediction as, at least, bad news for future people. I'm a little unsure exactly how bad the news is, though.

Democratic governments are currently, on average, much better for the people who live under them. It's not always possible to be totally sure of causation, but massacres, famines, serious suppressions of liberties, etc., have clearly been much more common under dictatorial governments than democratic governments. There are also pretty basic reasons to expect democracies to typically be better for the people under them: there's a stronger link between government decisions and people's preferences. I expect this logic to hold, even if a lot of the specific ways in which dictatorships are on average worse than democracies (like higher famine risk) become less relevant in the future.

At the same time, I'm not sure we should be imagining a dystopia. Most people alive today live under dictatorial governments, and, for most of these people, daily life doesn't feel like a boot on the face. The average person in Hanoi, for example, doesn't think of themselves as living in the midst of catastrophe. Growing prosperity and some forms of technological progress are also reasons to expect quality of life to go up over time, even if the political situation deteriorates.

So I just want to clarify that, even though I'm predicting a counterfactually worse outcome, I'm not necessarily predicting a dystopia for most people, or a scenario in which most people's lives are net negative. A dystopian future is conceivable, but doesn't necessarily follow from a lack of democracy.

On the implications of my prediction for "value lock-in," more broadly:

I think the main benefit of democracy, in this case, is that we should probably expect a wider range of values to be taken into account when important decisions with long-lasting consequences are made. Inclusiveness and pluralism of course don't always imply morally better outcomes. But moral uncertainty considerations probably push in the direction of greater inclusivity/pluralism being good, in expectation. From some perspectives, it's also inherently morally valuable for important decisions to be made in inclusive/pluralistic ways. Finally, I expect the average dictator to have worse values than the average non-dictator.

I actually haven't thought very hard about the implications of dictatorship and democracy for value lock-in, though. I think I also probably have a bit of a reflexive bias toward democracy here.

Is Democracy a Fad?

Are you seeing this prediction as including scenarios in which TAI has been developed by then, but things are basically going well, at least one million beings roughly like humans still exist, and the TAI is either agential and well-aligned with humanity and deferring to our wishes[1] or CAIS-like / tool-like?

Yep! I'm including these scenarios in the prediction.

I suppose I'm conditioning on either:

(a) AI has already been truly transformative, but people are still around and still meaningfully responsible for some important political decisions.*

(b) AI hasn't yet been truly transformative, but people haven't gone extinct.

I actually haven't thought enough about the relative probability of these two cases or my actual conditional probabilities for each of them. So my "4-in-5" prediction shouldn't be taken as very rigorously thought through. I think the outside view is relevant to both cases, but the automation argument is only very relevant to the first case.

*I agree with your analogy here: People might be "meaningfully responsible" in the same way that US citizens are "meaningfully responsible" for US government actions, even though they only provide very occasional and simple inputs.

A related uncertainty I have is what you mean by "individual people still at least sort of exist" in that quote. E.g., would you include whole brain emulations with a fairly similar mind design to current humans?

I'm a little torn here. I've gone back and forth on this point, but haven't really settled on how much including emulations should or should not influence the prediction. (Another sign that my "4-in-5" shouldn't be taken too seriously.)

If whole brain emulations have largely replaced regular biological people, and mostly aren't doing work (because other AI systems can do better jobs for most relevant cognitive tasks), then the automation argument still applies. But we should also assume, if we're talking about emulations, that there have been an incredible number of other changes, some of which might be much more relevant than the destruction of the value of labor. For example, surely the ability to make copies of an emulation has implications for the nature of voting.

So, although I still feel that automation pushes in the direction of dictatorship, in the emulation case, I do feel a bit silly making mechanistic or "inside view" arguments given how foreign this possible future is to us. I also think the outside view continues to be relevant. At the same time, though, there might be a somewhat stronger case for just throwing up our hands and beginning from a non-informative 50/50 prior instead of trying to think too hard about base rates.

Is Democracy a Fad?

Hi Michael, I think this is a great comment! I would be really interested in a rough 'civilizational trends database' or anything that could help clarify what a sensible prior for social trend persistence would be.

I'm not exactly sure how this would work, but one trick might be to pick a few well-documented times/regions in world history and try to log trends that historians think are worth remarking on. For example, for the late Roman Empire, the 'religious trends' subset of the database would include both the rise of Christianity (ultra-robust) and the rise of Sol Invictus worship (not nearly as robust). Although, especially for older periods, shorter-lived trends might be systematically under-discussed/under-recorded.

Is Democracy a Fad?

I would actually bet on average democracy continuing to increase over the next few decades.* Over this timespan, I'm still pretty inclined to extrapolate the rising trend forward, rather than updating very much on the past decade or so of possible backsliding. It also seems relevant that many relatively poorer and less democratic countries are continuing to develop, supposing that development actually is an important factor in democratization.

I also don't think there are any signs that automation is already playing a major role in democratic backsliding. (I think much more automation is probably necessary.) So, unless there's really rapid AI progress, I don't expect the specific causal mechanism I'm nervous about to kick in for a while.

*Off the top of my head, conditional on the Polity project continuing to exist, I might say there's something like a 70% chance that the average country's Polity score is higher in 2050 than it is today.

Is Democracy a Fad?

So it's not obvious to me that there will be any positive length window of time between full automation and the end of human supremacy.

I agree with this -- and agree I probably should have emphasized this caveat more!

The critical thing, in my mind, is whether humans (or something in that ballpark) are still largely governing themselves. This is consistent with broadly superhuman AI capabilities existing. For example, on a CAIS-like development trajectory, these superhuman AI capabilities might not even (for the most part) be embedded in very agential systems.

But if humans just totally lose control, or become just totally unrecognizable, then I think the analysis really breaks down. At a certain point, it's hard even to understand what "democracy" would mean.

Even if there was a short positive window, it's also possible that status quo bias might carry democracy over, as political convergence on locally optimal policy seems to be a slow process at best (e.g. the long coexistence of Parliamentary and Presidential systems, or of North and South Korea).

I think that's a good point! The length of the window (and the gradualness of the transition to full automation) probably is very consequential.
