I would just like to point out three "classical EA" arguments for taking recommender systems very seriously.
1) The dangerousness of AGI has been argued to be orthogonal to the purpose of AGI, as illustrated by the paperclip maximizer. If you accept this "orthogonality thesis" and if you are concerned about AGI, then you should be concerned about the most sophisticated maximization algorithms. Recommender systems seem to be today's most sophisticated maximization algorithms (a lot more money and computing power has been invested ...
I guess the post already strongly insisted on the scale and neglectedness of short-term AI alignment. But I can dwell more on this. There are now more views on YouTube than searches on Google, 70% of which result from recommendation. And a few studies (cited here) suggest that repeated exposure to some kind of information has a strong effect on beliefs, preferences and habits. Since this has major impacts on all other EA causes, I'd say the scale of the problem is at least that of any other EA cause.
I believe that alignment is extre...
The importance of ethics in YouTube recommendation seems to have grown significantly over the last two years (see this for instance). This suggests that there are pressures both from outside and inside that may be effective in making YouTube care about recommending quality information.
Now, YouTube's effort seems to have been mostly about removing (or recommending less) undesirable content so far (though as an outsider it's hard for me to say). Perhaps they can also be convinced to recommend more desirable content.
It's unfortunately very hard to quantify the impact of recommender systems. But here's one experiment that may update your prior on the effectiveness of targeted video recommendations.
In 2013, Facebook ran a large-scale experiment where they tweaked their newsfeeds. For some of their users, they removed 10% of posts with negative content. For others, they removed 10% of posts with positive content. And there was also a control group. After only one week, they observed a change in users' behaviors: the first group posted more positive cont...
Well, you were more than right to do so! You (and others) have convinced us. We changed the title of the book :)
This is a fair point. We do not discuss the global improvement of the world much. I guess we tried to avoid upsetting those who so far have a negative vision of AI.
However, Chapter 5 does strongly emphasize the opportunities of (aligned) AIs in a very large number of fields. In fact, we argue that there is a compelling argument that fighting AI progress is morally wrong (though, of course, there is the equally compelling flip side of the argument if one is concerned about powerful AIs...).
We should probably add something about the personification...
This is a good point. The book does indeed focus a lot on research questions.
We do see value in many corporations discussing AI ethics. In particular, there seems to be a rise of ethical discussions within the big tech companies, which we hope to encourage. In fact, in Chapter 7, we urge AI companies like Google and Facebook not only to take part in the AI ethics discussion and research, but to actively motivate, organize and coordinate it, typically by sharing their AI ethics dilemmas and perhaps parts of their AI codes. In a sense, they have already started to ...
The book will be published by EDP Sciences. They focus a lot on textbooks. But they also work on outreach books. I published my first book with them on Bayesianism.
We hope to reach all sorts of people who are intrigued by AI but do not have any background in the topic. We also hope that more technical readers will find the book useful as an overview of AI Safety.
I should point out that I run a YouTube channel, whose audience will likely form the base audience of the book as well.
Thanks! This is reassuring. Last week I met someone doing his PhD in post-quantum cryptography, and he told me about an ongoing competition to set the standards for such cryptography. The transition seems to be on its way!
Great post! It's very nice to see this problem being put forward. Here are a few remarks.
It seems to me that the scale of the problem may be underestimated by the post. Two statistics that suggest this: there are now more views on YouTube than searches on Google, and 70% of those views result from YouTube recommendations. Meanwhile, psychology stresses biases like the availability bias and the mere-exposure effect, which suggest that YouTube strongly influences what people think, want and do. Here are a few links about this:
NB: I've edited the sentence to clarify what I meant.
The argument here is rather that recommender systems are maximization algorithms, and that, if you buy the "orthogonality thesis", there is no reason to think that they cannot become AGI. In particular, you should not judge the capability of an algorithm by the simplicity of the task it is given.
Of course, you may reject the orthogonality thesis. If so, please ignore the first argument.