Starting around minute 32 of this Monday's 538 podcast, Nate Silver and crew discuss AI politics and risk. Coming from an outlet strongly aligned with taking evidence, forecasting, and political analysis seriously, I found this interesting for several reasons, both for the arguments themselves and for the explicit discussion of the EA community:
- Nate Silver making the point that this is much more important than other issues, and that a 5% existential risk would be a really big deal
- A fairly detailed and somewhat accurate description of the EA community
- The insularity of the EA/AI risk community and the difficulty of translating its ideas for the wider public
- Warning shots as the primary mechanism by which actual risk could turn out lower
- The rationality/EA community mentioned as being good at identifying important things
- The difficulty of taking the issue fully seriously
- In other episodes, Nate has also mentioned that he will cover AI risk in some detail in his upcoming book
I think saying "even a small risk of extinction is worth prioritising!" sends the wrong message. It's (usually) aimed at reassuring people that they can carry on as normal. What should be communicated instead is something like:
Fair points. Any PR/market/messaging research needs to focus on the specific audience one is trying to reach: the general public, AI professionals, EAs, or LessWrong rationalists. But any such questions can be turned into empirical research, if that would help guide outreach.