
I wrote this post on July 8, 2023, but it seemed relevant to share here given some of the comments my entry in the AI Pause Debate received.


As AI x-risk goes mainstream, lines are being drawn in the broader AI safety debate. One through-line is the disposition toward technology in general. Some people are wary even of AI-gone-right because they are suspicious of societal change, and they fear that greater levels of convenience and artificiality will further alienate us from our humanity. People closer to my own camp often believe that it is bad to interfere with technological progress and that Luddism has been proven wrong because of all of the positive technological developments of the past. “Everyone thinks this time is different”, I have been told with a pitying smile, as if it were long ago proven that technology=good and the matter is closed. But technology is not one thing, and therefore “all tech” is not a valid reference class from which to forecast the future. This use of “technology” is a bucket error.

What is a bucket error?

A bucket error is when multiple different concepts or variables are incorrectly lumped together in one's mind as a single concept/variable, potentially leading to distortions of one's thinking.

(Source)

The term was coined as part of a longer post by Anna Salamon that included an example of a little girl who thinks that being a writer entails spelling words correctly. To her, there’s only one bucket for “being a writer” and “being good at spelling”.

“I did not!” says the kid, whereupon she bursts into tears, and runs away and hides in the closet, repeating again and again: “I did not misspell the word! I can too be a writer!”.

[…]

When in fact the different considerations in the little girl’s bucket are separable. A writer can misspell words.

Why is “technology” a false bucket?

Broadly, there are two versions of the false technology bucket out there: tech=bad and tech=good. Both are wrong.

Why? Simply put: “technology” is not one kind of thing.

[Image: Google-supplied definition of “technology”.]

The common thread across the set of all technology is highly abstract (“scientific knowledge”, “applied sciences”— in other words, pertaining to our knowledge of the entire natural world), whereas concrete technologies themselves do all manner of things and can have effects that counteract each other. A personal computer is technology. Controlled fire is technology. A butterfly hair clip is technology. A USB-charging vape is technology. A plow is technology. “Tech” today is often shorthand for electronics and software. Some technologies of this kind, like computer viruses, are made to cause harm and violate people’s boundaries. But continuous glucose monitors are made to keep people with diabetes alive and improve their quality of life. It’s not that there are no broad commonalities across technologies— for example, they tend to increase our abilities— but that there aren’t very useful trends in whether “technology” as a whole is good or bad.

People who fear technological development often see technological progress as a whole as a move toward convenience and away from human self-reliance (and possibly into the hands of fickle new regimes or overlords). And I don’t think they are wrong— new tech can screw up our attention spans or disperse communities or exacerbate concentrated power. I just think they aren’t appreciating, or are taking for granted, how much the older technologies they are used to have enhanced our lives: so much, on balance, that the false bucket of “tech progress as a whole” has been worth the costs so far. But that doesn’t mean that new tech will always be worth the costs.

In fact, we have plenty of examples of successfully banned or restricted technologies, like nuclear bombs and chemical weapons, whose use we had every reason to suspect would represent change for the worse. The boosters of tech progress often forget to include these technologies in their parade of Luddite-embarrassing technological successes. Have bans on weapons of mass destruction held the world back? If not, shouldn’t that give the lie to the “technology=good” bucket? Sadly, “weapons” seems to be in a falsely separate bucket from technology for many who think this way.

What does this have to do with AI?

We don’t know what to think about AI. We don’t know when AGI is coming. We don’t know what will happen. Out of that ignorance, we attempt to compare the situation to situations we understand better, and many are falling back on their conflated beliefs about “technology” in general. Those beliefs may be negative or positive. More importantly to me, those beliefs about technology just aren’t that relevant. 

AI is, of course, technology. But I think it could just as accurately be called a “weapon” or, as AGI arrives, an “alien mind”.

Do those categories strike you as different buckets, with different implications? Does “weapon” or “alien mind” seem like a different reference class, leading to different predictions about how AI turns out for humanity?

If your instinct is to argue that, actually, AI is a technology and not a weapon or an alien mind (essentially, that the technology bucket is correct and AI belongs in it)— what does that move get you? Do you think it gets you a better reference class for forecasting? Some other predictive power?

Okay, now consider that AGI could well fit into all these reference classes and more. Every time is a little bit different, but creating a new mind more intelligent than us could be very different indeed. There’s no rule that says “no time is actually different”, just like there’s no rule that says we’ll make it.

There is a place for looking at reference classes, but I would argue that, in this case, that’s at a much finer level[1], and we must accept that in many ways we are in new territory. In response to concerns about risks from AI, I am sometimes told, essentially, “the world has never ended before”, which is both substantially false (the world has ended for the majority of species before us, and human civilizations have collapsed many times) and fallacious— if the planet had been destroyed, we wouldn’t be looking back saying “well, the world only ended once before, but it has happened”.

We aren’t restricted to reasoning about large categories here. We can think about the specifics of this situation. I’ve had my disagreements with the modest epistemology concept, but it clearly applies here. We can just reason (on the object level) about how a more intelligent entity could fuck us up, whether those things could really happen or not, and try to prevent them instead of second-guessing ourselves and worrying that people who worried about superficially similar situations in the past looked stupid to future generations[2].

No matter what happens with AI, it won’t mean that technology was truly good or truly bad all along. That means we can’t use tech=good or tech=bad as a premise now to figure out what’s going to happen with AI just because it’s a kind of technology. 

(Standard disclaimer.)

  1. ^

    Some examples: 
    - When predicting capabilities: Other ML models, possibly mammalian cortex. 
    - When predicting benchmarks: Available supplies per time in previous years, precedents and case studies for various kinds of applicable regulations, looking at the fates of VC-funded tech start-ups.

  2. ^

    As I looked for links, I found out that the Luddites have been getting some recognition and vindication lately for a change.


Comments (10)

Strongly agree.

I think it's important not to perceive this error as one of individual failures of rationality, but as one that is predictably ideological and cultural.

A position of being pro-technology except for AI is a fairly idiosyncratic one to hold, as it doesn't map onto standard ideologies and political fault lines.

A few general comments on this essay:

  • I mostly agree with it. I agree, for example, that "we can’t use tech=good or tech=bad as a premise now to figure out what’s going to happen with AI". We should instead be sensitive to the specific details of AI when assessing whether and how it will be a risk. Many technologies have been bad before, and religiously adhering to a rule that "every technology is good" would be absurd.
  • However, I also think that a large fraction of people, plausibly the majority, have a pessimistic bias when it comes to new technologies -- often while using what was considered new last decade without hesitation. I agree with Bryan Caplan that many people seem motivated by a bizarre search for "dark linings in the silver clouds of business progress". If you agree with me that this bias exists, then perhaps you can sympathize with my guess that the bias also affects how people perceive AI risk.
  • I don't think it's irrational to use a general heuristic of "technological progress is good" as long as the heuristic can be overridden with sufficient evidence. Lots of things in life are like this. For example, I generally use the heuristic "lying is bad" even though I don't think lying is always bad. The reason for the heuristic is that it's usually true, and that seems like an important rule of thumb to follow in situations where we don't know all the consequences of our behavior. Even in situations where I think "maybe lying might actually be good here", the heuristic reminds me that I generally need strong evidence. Does that mean I'm suffering from a "dishonesty" bucket error by lumping all lies together in the same bin? I don't think so.

I agree with most of the above, but I’m left more confused as to why you don’t already see AI as an exception to tech progress being generally good.

AIs could help us achieve what we want. We could become extremely wealthy, solve aging and disease, find ways of elevating well-being, maybe even solve wild animal suffering, and accelerate alternatives to meat. I'm concerned about s-risks and the possibility of severe misalignment, but I don't think either are default outcomes. I just haven't seen a good argument for why we'd expect these catastrophic scenarios under standard incentives for businesses. Unless you think that these risks are probable, why would you think AI is an exception to the general trend of technology being good?

Without getting into whether or not it's reasonable to expect catastrophe as the default under standard incentives for businesses, I think it's reasonable to hold the view that AI is probably going to be good while still thinking that the risks are unacceptably high.

If you think the odds of catastrophe are 10% — but otherwise think the remaining 90% is going to lead to amazing and abundant worlds for humans — you might still conclude that AI doesn't challenge the general trend of technology being good.

But I think it's also reasonable to conclude that 10% is still way too high given the massive stakes and the difficulty involved with trying to reverse/change course, which is disanalogous with most other technologies. IMO, the high stakes + difficulty of changing course is sufficient to override the "tech is generally good" heuristic.
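A toy sketch of that trade-off (all numbers here are illustrative placeholders, not anyone's actual estimates): when the downside is large and hard to reverse, even a small tail probability can dominate the expected value.

```python
# Toy expected-value comparison; the probabilities and payoffs are made up
# purely for illustration, not estimates from this thread.
p_catastrophe = 0.10          # hypothetical chance things go very badly
value_good_world = 100        # arbitrary units of value if AI goes well
value_catastrophe = -10_000   # much larger, hard-to-reverse loss if it doesn't

expected_value = (1 - p_catastrophe) * value_good_world + p_catastrophe * value_catastrophe
print(expected_value)  # -910.0: the 10% tail outweighs the 90% upside in this toy case
```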

I also think existential risk from AI is way too high. That's why I strongly support AI safety research, careful regulation and AI governance. I'm objecting to the point about whether AI should be seen as an exception to the rule that technology is good. In the most probable scenario, it may well be the best technology ever!

100% agree, that's my view: 80+% chance of being good (on a spectrum of good too, not just utopia-good), but an unacceptably high risk of being bad. And within that remaining 20ish (or whatever) percent of possible bad, most of the bad in my mind is far from existential (a bad actor controls AI, AI drives inequality to the point of serious unrest and war for a time, etc.)

It's interesting to me that, even within this AI safety discussion, a decent number of comments don't seem to have a bell curve of outcomes in mind - many still seem to be looking at a binary between techno-utopia and doom. I do recognise that it's reasonable to think that those two are by far the most likely options, though.

 

If this debate were about whether we should do anything to reduce AI risk, then I would strongly be on the side of doing something. I'm not an effective accelerationist. I think AI will probably be good, but that doesn't mean I think we should simply wait around until it happens. I'm objecting to a narrower point about whether we should view AI as an exception to the general rule that technology is good.

I think the answer to that question depends on how catastrophically bad tech of high enough capability could be, on the negative externalities of tech, and on whether you include tech designed to cause harm, like weapons. I have a very positive view of most technology, but I'm not sure how a category that included all of those would look in the end, due to the tail risks.

When you reason using probabilities, the more examples you have to reason over, the more likely your estimate is to be correct.

If you make a bucket of "all technology" - because, as you say, the reference class for AI is fuzzy - you consider the examples of all technology.

I assume you agree that the net EV of "all technology" is positive.

The narrower you make it ("is AGI exactly like a self-replicating bioweapon?"), the more you can choose a reference class that has a negative EV, but few examples. I agree, and you agree: self-replicating bioweapons are negative EV.

But... that kind of bucketing, based on information you don't have, is false reasoning. You're wrong. You don't have the evidence yet to prove AGI's reference class, because you have no AGI to test.

Correct reasoning for a technology that doesn't even exist forces you to use a broad reference class. You cannot rationally do better? (question mark is because I don't know of an algorithm that lets you do better.)

Let me give an analogy. There are medical treatments where your bone marrow is replaced. These have terrible death rates, sometimes 66 percent. But if you don't get the bone marrow replacement your death rate is 100 percent. So it's a positive EV decision and you do not know the bucket you will fall in, [survivor| ! survivor]. So the rational choice is to say "yes" to the treatment and hope for the best. (ignoring pain experienced for simplicity)
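For what it's worth, the arithmetic behind that analogy is just a comparison of survival probabilities, using the figures stated above (a roughly 66 percent death rate with the transplant versus 100 percent without it):

```python
# Survival-probability comparison for the bone marrow analogy above.
# Figures come from the comment: ~66% death rate with treatment, 100% without.
p_survive_with_treatment = 1 - 0.66     # ~0.34 chance of surviving the procedure
p_survive_without_treatment = 1 - 1.00  # 0.0 chance of surviving untreated

# Whatever positive value you place on survival, the treatment dominates:
print(p_survive_with_treatment > p_survive_without_treatment)  # True
```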

The people that smile at you sadly - they are correct, and the above is why. The reason they are sad is that, well, we as a species could in fact end up out of luck, but this is a decision we still must take.

All human scientific reasoning and decision-making is dependent on past information. If you consider all the past information we have and apply it to the reference class of "AI", you end up with certain conclusions. (It'll probably quench, it's probably a useful tool, we probably can't stop everyone from building it).

You can't reason on unproven future information. Even if you may happen to be correct.
