I want to get a sense for what kinds of things EAs — who don't spend most of their time thinking about AI stuff — find most confusing/uncertain/weird/suspect/etc. about it.
By "AI stuff", I mean anything to do with how AI relates to EA.
For example, this includes:
- What's the best argument for prioritising AI stuff?
- How, if at all, should I factor AI stuff into my career plans?
but doesn't include:
- How do neural networks work? (except insofar as it's relevant to your understanding of how AI relates to EA).
Example topics: AI alignment/safety, AI governance, AI as a cause area, AI progress, the AI alignment/safety/governance communities, ...
I encourage you to have a low bar for writing an answer! Short, off-the-cuff thoughts very welcome.
I've since gotten a bit more context, but I remember feeling super confused about these things when first wondering how much to focus on this stuff:
(If people are curious, the resources I found most helpful on these were:
- 1.1: this, this, and this
- 1.2: the 1.1 resources, plus longtermism arguments and The Precipice on non-AI existential risks
- 1.3 and 3: the 1.1 resources, plus the material in this syllabus
- 1.4: ch. 2 of Superintelligence
- 1.6: this
- 4: the earlier resources (1.1 and 3)
- 1.5 and 2: various more scattered things)