Why Not Try Build Safe AGI?

Dec 24, 2022 by Remmelt

Copy-pasted from my one-on-one conversations with AI Safety researchers:

Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend)
Remmelt · 3y ago

List #1: Why stopping the development of AGI is hard but doable
Remmelt · 2y ago

List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans
Remmelt · 2y ago

List #3: Why not to assume on prior that AGI-alignment workarounds are available
Remmelt · 2y ago