Effective Altruism Forum

HPS FOR AI SAFETY
Jul 13, 2022 by Eleni_A

A collection of AI safety posts written from a history and philosophy of science (HPS) point of view.

- An Epistemological Account of Intuitions in Science (Eleni_A, 7mo): 5 karma, 0 comments
- Alignment is hard. Communicating that, might be harder (Eleni_A, 7mo): 17 karma, 1 comment
- "Normal accidents" and AI systems (Eleni_A, 7mo): 4 karma, 1 comment
- Beware of proxies (Eleni_A, 7mo): 2 karma, 0 comments
- It's (not) how you use it (Eleni_A, 6mo): 6 karma, 3 comments
- Alignment's phlogiston (Eleni_A, 7mo): 18 karma, 1 comment
- Who ordered alignment's apple? (Eleni_A, 7mo): 5 karma, 0 comments
- There is no royal road to alignment (Eleni_A, 6mo): 18 karma, 2 comments
- Against the weirdness heuristic (Eleni_A, 5mo): 5 karma, 0 comments
- Cognitive science and failed AI forecasts (Eleni_A, 4mo): 13 karma, 0 comments
- Emerging Paradigms: The Case of Artificial Intelligence Safety (Eleni_A, 2mo): 16 karma, 0 comments