Effective Altruism Forum
HPS FOR AI SAFETY
A collection of AI safety posts from the history and philosophy of science (HPS) point of view.
- An Epistemological Account of Intuitions in Science (Eleni_A, 7mo) · 5 karma, 0 comments
- Alignment is hard. Communicating that, might be harder (Eleni_A, 7mo) · 17 karma, 1 comment
- "Normal accidents" and AI systems (Eleni_A, 7mo) · 4 karma, 1 comment
- Beware of proxies (Eleni_A, 7mo) · 2 karma, 0 comments
- It's (not) how you use it (Eleni_A, 6mo) · 6 karma, 3 comments
- Alignment's phlogiston (Eleni_A, 7mo) · 18 karma, 1 comment
- Who ordered alignment's apple? (Eleni_A, 7mo) · 5 karma, 0 comments
- There is no royal road to alignment (Eleni_A, 6mo) · 18 karma, 2 comments
- Against the weirdness heuristic (Eleni_A, 5mo) · 5 karma, 0 comments
- Cognitive science and failed AI forecasts (Eleni_A, 4mo) · 13 karma, 0 comments
- Emerging Paradigms: The Case of Artificial Intelligence Safety (Eleni_A, 2mo) · 16 karma, 0 comments