Near-term AI ethics

Discuss the topic on this page. Here is the place to ask questions and propose changes.

I don't quite understand what is meant here by "near-term AI ethics". Is it something like "the ethical issues posed by AI when only its effects on the present population are taken into account"?

If you look at "Beyond near- and long-term: Towards a clearer account of research priorities in AI ethics and society", you get the following description:

As the phrase ‘near-term’ suggests, those who have written about the distinction tend to characterise near-term issues as those issues that society is already facing or likely to face very soon: Brundage [7] defines near-term issues as those society is “grappling with today” and Cave and ÓhÉigeartaigh [9] talk in terms of “immediate or imminent challenges” (p.5). Examples include concerns about data privacy [32, 34], algorithmic bias [19, 21], self-driving car accidents [4, 17], and ethical issues associated with autonomous weapons [1, 2].

Great, thanks. I expanded the entry.