AI alignment
• Applied to Alignment & Capabilities: What's the difference? (1mo ago)
• Applied to Apply to a small iteration of MLAB to be run in Oxford (1mo ago)
• Applied to Origin and alignment of goals, meaning, and morality (1mo ago)
• Applied to [Crosspost] AI Regulation May Be More Important Than AI Alignment For Existential Safety (1mo ago)
• Applied to Why Is No One Trying To Align Profit Incentives With Alignment Research? (1mo ago)
• Applied to Summary of “The Precipice” (2 of 4): We are a danger to ourselves (1mo ago)
• Applied to What do we know about Mustafa Suleyman's position on AI Safety? (1mo ago)
• Applied to Safety-First Agents/Architectures Are a Promising Path to Safe AGI (2mo ago)
• Applied to 3 levels of threat obfuscation (2mo ago)
• Applied to If AIs had subcortical brain simulation, would that solve the alignment problem? (2mo ago)
• Applied to AXRP Episode 24 - Superalignment with Jan Leike (2mo ago)
• Applied to "The Universe of Minds" - call for reviewers (Seeds of Science) (2mo ago)
• Applied to Carl Shulman on AI takeover mechanisms (& more): Part II of Dwarkesh Patel interview for The Lunar Society (2mo ago)
• Applied to [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME) (2mo ago)
• Applied to I'm interviewing Jan Leike, co-lead of OpenAI's new Superalignment project. What should I ask him? (2mo ago)
• Applied to Train for incorrigibility, then reverse it (Shutdown Problem Contest Submission) (2mo ago)
• Applied to A simple way of exploiting AI's coming economic impact may be highly-impactful (2mo ago)
• Applied to Winners of AI Alignment Awards Research Contest (2mo ago)
• Applied to What new psychology research could best promote AI safety & alignment research? (2mo ago)