Near-term AI ethics

Near-term AI ethics is the branch of AI ethics that studies the moral questions arising from issues in AI that society is already facing or will likely face very soon. Examples include concerns about data privacy, algorithmic bias, self-driving cars, and autonomous weapons. Long-term AI ethics, by contrast, is the branch of AI ethics that studies the moral questions arising from issues that are expected to arise, or to arise to a much greater extent, when AI is much more advanced than it is today. Examples include the implications of artificial general intelligence or transformative artificial intelligence.[1][2]

Further reading

Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

  1. ^ Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

  2. ^ Brundage, Miles (2017) Guide to working in AI policy and strategy, 80,000 Hours, June 7.
