Hi Sam. I'm curious to what extent people in the field think risk communication could be beneficial for reducing AI risk. In other words, are there any aspects of AI risk that could be mitigated by large numbers of people having accurate knowledge about them? Or is AI risk communication largely irrelevant to the problem? Or is it more likely to increase rather than decrease AI risk (perhaps by means of some type of infohazard)?
It is inspiring! This was such a fun piece to write because of that. I'm glad you enjoyed it!
Feedback is welcome! Everything in this post is open to change.
Interesting thoughts, thanks for your input! I'll think about how to incorporate the feedback.
Fantastic comments, thank you! I included the bit about personal fulfillment because it's such an important component of sustaining an effective career long term. In retrospect, though, I was so focused on packing in as many EA ideas as I could that I didn't notice how out of place that sentiment was at that point in the story. I removed both that sentence and the one about more important causes, and I added a variant of your suggested replacement sentence.
Oh that's awesome. Thank you!
I'm glad to hear it. And sure thing!
Thanks, I'm glad you liked it!