Near-term AI ethics is the study of the moral questions arising from issues in AI that society is already facing or will likely face very soon. Examples include concerns about data privacy, algorithmic bias, self-driving car safety, and autonomous weapons. By contrast, long-term AI ethics studies the questions arising from issues that only arise, or arise to a much greater extent, when AI is much more advanced than it is today. Examples include the implications of artificial general intelligence or transformative artificial intelligence.[1][2]
Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.
Brundage, Miles (2017) Guide to working in AI policy and strategy, 80,000 Hours, June 7.
AI alignment |
AI governance | ethics of artificial intelligence