Yet another New York Times piece on AI. A non-AI-safety friend sent it to me saying "This is the scariest article I've read so far. I'm afraid I haven't been taking it very seriously". I'm noting this because I'm always curious to observe what moves people, what's out there that has the power to change minds. In the past few months, there's been increasing public attention to AI and all sorts of hot and cold takes, e.g., about intelligence, consciousness, sentience, etc. But this might be one of the articles that convey the AI risk message in a language that helps people understand and think about AI safety.

The following is what stood out to me and made me think that it's time for philosophy of science to also take AI risk seriously and revisit the idea of scientific explanation given the success of deep learning:

I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.

“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”
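
To make the quoted description a little more concrete, here is a toy sketch of what "predicting the next word statistically" means. It is a tiny bigram counter in Python, nowhere near the billion-parameter networks Klein and O'Gieblyn are describing, and every name in it is invented for the example; it only illustrates the basic idea of choosing the next word from observed frequencies.

```python
# Toy sketch only: predict the "next word" from bigram counts.
# Real systems are neural networks trained on vast corpora; this just
# makes the idea of statistical next-word prediction concrete.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word is followed by each other word.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`, if any."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (seen twice after "the", vs. once each for 'mat' and 'sofa')
```

Even this toy version points at the gap the article describes: here you can inspect the counts directly, whereas in a large network the equivalent of those counts is spread across billions of learned weights.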

Comments

Thanks for posting this. I think it's valuable to pay attention to what drives shifts in perception.

I think Ezra Klein does a good job appealing to certain worldviews and making what may initially seem abstract feel more relatable. To me personally, this piece was even more relatable than the one cited.

In the piece you cited, I think it's helpful that he:

  • calls out the "weirdness"
  • acknowledges the fact that people who work on z are likely to think z is very important, but identifies as someone who does not work on z
  • doesn't go into the woods with theories

I think it's likely that a lot of the influence on public perception may be due to the fact that AI risks have entered the sphere of mainstream public discourse by way of several reputable publications over a pretty short time span. 

It might have increased recently, but even in 2015, one survey found 44% of the American public would consider AI an existential threat. It's now 55%.

David - thanks much for sharing the link to this Monmouth University survey. I urge everybody to have a look at it here (the same link you shared).

The survey looks pretty good methodologically: a probability-based national random sample of 805 U.S. adults, run by a reputable academic polling institute.

Two key results are worth highlighting, IMHO:

First, in response to the question "How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?", 55% of people (as you mentioned) were 'very worried' or 'somewhat worried', and only 16% were 'not at all worried'.

Second, in response to the question "If computer scientists really were able to develop computers with artificial intelligence, what effect do you think this would have on society as a whole? Would it do more good than harm, more harm than good, or about equal amounts of harm and good?", 41% predicted more harm than good, and only 9% predicted more good than harm.

Long story short, the American public is already very concerned about AI X risk, and very dubious that AI will bring more benefits than costs. 

This contrasts markedly with the AI industry rhetoric/PR/propaganda that says everybody's excited about the wonderful future that AI will bring, and embraces that future with open arms.

Thanks for sharing @Geoffrey Miller and @DavidNash .

The results of this study are interesting for sure. Examining them more carefully makes me wonder if there is a significant priming effect in play in both the 2015 and 2023 polls. Priming alone would not explain the 11 percentage point increase in participants worried about AI eventually posing a threat to the existence of the human race, though it potentially could have contributed, since some questions were added to the 2023 poll that weren't in the 2015 one.

I was surprised that in 2023, only 60% of participants “Had heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans?” (Question 26)

Looks like they used a telephone survey. I would imagine that getting 805 random participants willing to answer a call from a (presumably) unrecognized number, much less partake in a 39-question phone survey, would be rough these days. I don't see any mention of incentivizing participation, though.

Fascinating!