Written by LW user Richard_Ngo.
This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.
Excerpt from the post:
Ben Pace and I (Richard Ngo) recently did a public double crux at the Berkeley REACH on how valuable it is for people to go into AI policy and strategy work: I was optimistic and Ben was pessimistic. During the actual event, we didn't come anywhere near to finding a double crux on that issue. But after a lot of subsequent discussion, we've come up with some more general cruxes about where impact comes from.
I found Ben's model of how to have impact very interesting, and so in this post I've tried to explain it, along with my disagreements. Ben liked the goal of writing up a rough summary of our positions and having further discussion in the comments, so while he edited it somewhat, he doesn't at all think that it's a perfect argument, and it's not what he'd write if he spent 10 hours on it. He endorsed the wording of the cruxes as broadly accurate. (Full post on LW)
Please feel free to:
- Discuss in the comments
- Subscribe to the LessWrong for EA tag to be notified of future posts
- Tag other LessWrong reposts with LessWrong for EA
- Recommend additional posts
Cool arguments on the impact of policy work for AI safety. I find myself agreeing with Richard Ngo's support of AI policy, given the scale of government influence and the uncertain nature of AI risk. Here are a few quotes from the piece.
How AI could be influenced by policy experts:
Why EA specifically could succeed:
These opposing opinions are driven by different views on timelines, takeoff speeds, and sources of risk:
Thanks for sharing LW4EA! Particularly the AI safety stuff. It’s an act of community service.