Effective Altruism Forum
AI alignment
• Applied to The Tree of Life: Stanford AI Alignment Theory of Change by Gabriel Mukobi at 6h
• Applied to Follow along with Columbia EA's Advanced AI Safety Fellowship! by RohanS at 20h
• Applied to (Even) More Early-Career EAs Should Try AI Safety Technical Research by levin at 2d
• Applied to Quick survey on AI alignment resources by Guy Raveh at 2d
• Applied to AI safety university groups: a promising opportunity to reduce existential risk by mic at 2d
• Applied to Announcing the Harvard AI Safety Team by Alexander Davies at 2d
• Applied to $500 bounty for alignment contest ideas by Akash at 3d
• Applied to What success looks like by mariushobbhahn at 4d
• Applied to 7 essays on Building a Better Future by Jamie_Harris at 8d
• Applied to A Quick List of Some Problems in AI Alignment As A Field by NicholasKross at 11d
• Applied to Key Papers in Language Model Safety by mic at 12d
• Applied to On Deference and Yudkowsky's AI Risk Estimates by Vaidehi Agarwalla at 13d
• Applied to ‘Force multipliers’ for EA research by Craig Drayton at 15d
• Applied to FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community by Darren McKee at 16d
• Applied to A central AI alignment problem: capabilities generalization, and the sharp left turn by Nathan Young at 17d
• Applied to Align Humans to Rationality? by Scytale at 18d
• Applied to Expected ethical value of a career in AI safety by Jordan Taylor at 19d
• Applied to Steering AI to care for animals, and soon by evelynciara at 19d
• Applied to AGI Safety Communications Initiative by Ines at 21d