Near-term AI ethics

Near-term AI ethics is the study of the moral questions arising from issues in AI that society is already facing or will likely face very soon. Examples include concerns about data privacy, algorithmic bias, self-driving cars, and autonomous weapons. By contrast, long-term AI ethics studies the questions arising from issues that only arise, or arise to a much greater extent, when AI is much more advanced than it is today. Examples include the implications of artificial general intelligence or transformative artificial intelligence.[1][2]

Further reading

Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

  1. ^ Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

  2. ^ Brundage, Miles (2017) Guide to working in AI policy and strategy, 80,000 Hours, June 7.
