AI alignment

The AI alignment tag is used for posts that discuss research on how to align AI systems with human interests or moral goals, and for meta-discussion about whether this goal is worthwhile, achievable, etc.

Evaluation

80,000 Hours rates AI alignment a "highest priority area": a problem at the top of their ranking of global issues assessed by importance, tractability and neglectedness (80,000 Hours 2021).

Further reading

Christiano, Paul (2020) Current work in AI alignment, Effective Altruism, April 3.

Shah, Rohin (2020) What’s been happening in AI alignment?, Effective Altruism, July 29.

Bibliography

80,000 Hours (2021) Our current list of the most important world problems, 80,000 Hours.
