Future Fund Worldview Prize
• Applied to AGI Isn’t Close - Future Fund Worldview Prize by Toni MUENDEL at 1mo
• Applied to What would power-seeking, misaligned AGI actually do? by Kiel Brennan-Marquez at 2mo
• Applied to Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest by [anonymous] at 2mo
• Applied to Disagreement with bio anchors that lead to shorter timelines by Lizka at 2mo
• Applied to Will AI Worldview Prize Funding Be Replaced? by Amber Dawn at 2mo
• Applied to How likely are malign priors over objectives? [aborted WIP] by Lizka at 3mo
• Applied to When can a mimic surprise you? Why generative models handle seemingly ill-posed problems by Amber Dawn at 3mo
• Applied to "Develop Anthropomorphic AGI to Save Humanity from Itself" (Future Fund AI Worldview Prize submission) by ketanrama at 3mo
• Applied to "AI predictions" (Future Fund AI Worldview Prize submission) by ketanrama at 3mo
• Applied to "AGI timelines: ignore the social factor at their peril" (Future Fund AI Worldview Prize submission) by ketanrama at 3mo
• Applied to Is AI forecasting a waste of effort on the margin? by Emrik at 3mo
• Applied to Why do we post our AI safety plans on the Internet? by Peter S. Park at 3mo
• Applied to AI X-Risk: Integrating on the Shoulders of Giants by TD_Pilditch at 3mo
• Applied to Worldview iPeople - Future Fund’s AI Worldview Prize by Amber Dawn at 3mo
• Applied to Why some people believe in AGI, but I don't. by cveres at 3mo
• Applied to Intent alignment should not be the goal for AGI x-risk reduction by johnjnay at 3mo
• Applied to What does it take to defend the world against out-of-control AGIs? by Steven Byrnes at 3mo