
A common argument that AI alignment matters is that unaligned AI systems deployed in the real world have already caused harm to people or society.[1] However, many of the cited examples are disputed – for example, one paper finds no evidence that YouTube's recommendation algorithm radicalizes users, but that study has been criticized for not examining how the algorithm interacts with individual, logged-in users.[2] What evidence is there that unaligned AI systems cause real-world harm, and how strong and consistent is it?

I know that the Center for Humane Technology has been compiling a Ledger of Harms documenting the various harms allegedly caused by digital technologies, but I'm concerned that it selects for positive results (i.e. results confirming that a technology causes harm).[3] Ideally, I would like to see a formal meta-analysis that incorporates both positive and negative results.


  1. See Aligning Recommender Systems as Cause Area.

  2. Feuer, Will. "Critics slam study claiming YouTube’s algorithm doesn’t lead to radicalization." CNBC, Dec. 30, 2019. Accessed July 21, 2020.

  3. According to the submission form, they only include studies that "someone could use ... to create (or catalyze the creation of) a more humane version" of the technology in question.

Answers

I don’t think the evidence is very good; I haven’t found it more than slightly convincing. Nor do I think the harms of current systems are a strong argument for the potential dangers of much more powerful systems.
