Effective Altruism Forum
Future Fund Worldview Prize
• Applied to "Announcing the Open Philanthropy AI Worldviews Contest" 2y ago
• Applied to "A $1 million prize to incentivize global participation in a video competition to crowdsource an attractive plan for the long-term development of civilization" 2y ago
• Applied to "AGI Isn't Close - Future Fund Worldview Prize" 2y ago
• Applied to "Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI" 2y ago
• Applied to "Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest" 2y ago
• Applied to "Disagreement with bio anchors that lead to shorter timelines" 2y ago
• Applied to "Will AI Worldview Prize Funding Be Replaced?" 2y ago
• Applied to "How likely are malign priors over objectives? [aborted WIP]" 2y ago
• Applied to "When can a mimic surprise you? Why generative models handle seemingly ill-posed problems" 2y ago
• Applied to "Develop Anthropomorphic AGI to Save Humanity from Itself" (Future Fund AI Worldview Prize submission) 2y ago
• Applied to "AI predictions" (Future Fund AI Worldview Prize submission) 2y ago
• Applied to "AGI timelines: ignore the social factor at their peril" (Future Fund AI Worldview Prize submission) 2y ago
• Applied to "Is AI forecasting a waste of effort on the margin?" 2y ago
• Applied to "Why do we post our AI safety plans on the Internet?" 2y ago
• Applied to "AI X-Risk: Integrating on the Shoulders of Giants" 2y ago
• Applied to "Worldview iPeople - Future Fund's AI Worldview Prize" 2y ago
• Applied to "Why some people believe in AGI, but I don't." 2y ago
• Applied to "Intent alignment should not be the goal for AGI x-risk reduction" 2y ago