I want to get a sense for what kinds of things EAs — who don't spend most of their time thinking about AI stuff — find most confusing/uncertain/weird/suspect/etc. about it.
By "AI stuff", I mean anything to do with how AI relates to EA.
For example, this includes:
- What's the best argument for prioritising AI stuff? and
- How, if at all, should I factor AI stuff into my career plans?
but doesn't include:
- How do neural networks work? (except insofar as it's relevant to your understanding of how AI relates to EA).
Example topics: AI alignment/safety, AI governance, AI as cause area, AI progress, the AI alignment/safety/governance communities, ...
I encourage you to have a low bar for writing an answer! Short, off-the-cuff thoughts very welcome.
Here are some big, common questions I've received from early-stage, AI-safety-focused people with at least some knowledge of EA.
They probably don't spend most of their time thinking about AIS, but it is their cause area of focus. Unsure if that exactly meets the criteria you're looking for.