I want to get a sense for what kinds of things EAs — who don't spend most of their time thinking about AI stuff — find most confusing/uncertain/weird/suspect/etc. about it.
By "AI stuff", I mean anything to do with how AI relates to EA.
For example, this includes:
- What's the best argument for prioritising AI stuff? and
- How, if at all, should I factor AI stuff into my career plans?
but doesn't include:
- How do neural networks work? (except insofar as it's relevant to your understanding of how AI relates to EA).
Example topics: AI alignment/safety, AI governance, AI as cause area, AI progress, the AI alignment/safety/governance communities, ...
I encourage you to have a low bar for writing an answer! Short, off-the-cuff thoughts very welcome.
Hi Sam. I'm curious about the extent to which people in the field think risk communication could help reduce AI risk. In other words, are there aspects of AI risk that could be mitigated by large numbers of people having accurate knowledge about them? Or is AI risk communication largely irrelevant to the problem? Or is it more likely to increase rather than decrease AI risk (perhaps via some kind of infohazard)?