
Most concerns about AI tend to boil down to:

  1. Loss of control to AI systems - What if AI were smarter than us and took over the world?
  2. Concentration of power - What if AI gave too much power to someone bad?

I'm surprised I haven't heard consideration of a third, more basic risk.

Will this technology be good?

Suppose you're in the Southern United States in 1793 and you believe that an important moral question, perhaps the most important moral question, is the labor ethics of how cotton is processed. I suspect I don't need to go into detail about why you might think this.

Proceeding directly from this belief, an obviously good idea suggests itself: what if a machine could process cotton instead of people? Then at least the people who process cotton wouldn't have to anymore.[1]

Imagine you work hard and, through grit and luck, it works! 

What is your confidence interval for how much better this makes life?

I hope your interval included negative numbers. For some reason, I never hear negative numbers for how much better life would be if AI could do lots of jobs for us.

This is in fact exactly what happened. Eli Whitney tried to reduce enslaved labor by creating a replacement machine, but it worked backward: by making processing cheap, the gin made growing cotton far more profitable, and slavery expanded to meet the new demand.[2]

Isn't this "concentration of power"?

No:

  1. The gains from the gin were no more concentrated than the production surplus that preceded it.
  2. The problem wasn't the gin's effect on processing. It was the indirect effect on cotton growing, and how concentrated that was didn't change at all.
  3. The outcome would have held regardless of how concentrated the gains were. The problem wasn't concentration of power; it was more cotton growing.

What can AI do that is bad?

I don't have clear answers. But the cotton gin should be enough of a cautionary tale about economics and unintended consequences.

A start is "tricking people". The central motive of most economic activity is to modify human behavior, usually with a step where people give you money. Training a net costs money. How will you make that money back? The net will help you change people's behavior so that they give you their money. Not every way of changing someone's behavior is good.

Another angle is "social change". If an invention turns dirty water into clean water more cheaply, it can change the math of economic activity, and that can bubble up into indirect social changes. AI is more direct. Its main successes have been text and images: abstract goods whose sole purpose is to feed directly into people's brains. It can change how people make decisions, directly, and in fact already does, in ads.

You've probably already thought hard about applications, and whether they'll be good or bad. But a meta point is: AI applications tend to go straight through people's brains more than most innovations in, say, physics. And inventions that change people's minds are the scariest, the most volatile, and the most likely to have unexpected effects.

  1. ^

    AI analog: what if a machine could do lots of different jobs now done by people?

  2. ^

    https://en.wikipedia.org/wiki/Eli_Whitney#:~:text=Whitney%20believed%20that%20his%20cotton,the%20end%20of%20southern%20slavery.

Comments

While I believe there is some wider context missing from certain aspects of this post (e.g., what sorts of AI progress are we talking about? Strong AI or transformative AI? This makes a difference), the analogy does a fair job of illustrating that the intention to use advanced AI to engender progress (beneficial outcomes for humanity) might have unintended and antithetical effects instead. This seems to encapsulate the core idea of AI safety / alignment: roughly, that a system capable of engendering vast amounts of scientific and economic gains for humanity need not be (and is unlikely to be) aligned with human values and flourishing by default, and is thus capable of taking actions that may cause great harm to humanity. The Future of Life Institute's page on the benefits and risks of AI comments on this:

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult.

There's no need to comment on this further, but the worst possible outcomes from unaligned AI seem extremely likely to exceed unemployment in severity.

I don't think it's just about "methods". There was nothing wrong with the cotton gin's "methods", or with its end goal. The problem was that other actors adapted their behavior to its presence, and, in fact, it's not hard to see in retrospect that this was very likely (not that they could have predicted it, but if we "rerolled" the universe, it would probably happen again).
