Wow, this is a well-written, well-researched post. Thanks for putting it together!
Similarly, AI technologies do not share many of the features that have enabled coordinated bans on (weapons) technologies, making coordinated restraint difficult.
It would have been nice to see some in-text examples of the ban-enabling features that AI lacks. I clicked on the links you provided, but there was too much material for it to be worth my time to go through them all.
If you're interested in more resources to help you decide, may I recommend https://80000hours.org/? It has a pretty good set of decision-making tips for someone like yourself. They also occasionally give out personalized career advice, which might be of benefit.
I find it a bit frustrating that most critiques of AI safety work, or longtermism in general, seem to start by constructing a strawman of the movement. I've read a ton of material by self-proclaimed longtermists, and would consider myself one, and I don't think I've ever heard anyone seriously propose reducing existential risk by .0000001 percent instead of lifting a billion people out of poverty. I'm sure some people have, but it's certainly not a mainstream view in the community.

And as others have rightly pointed out, there's a strong case to be made for caring about AI safety, engineered pandemics, or nuclear war even if all you care about are the people alive today.

The critique also does the "guilt by association" thing, trying to make the movement look bad by associating it with people the author knows are unpopular with their audience.
I have a quick question: if I want to have maximum impact on mitigating climate change, what's the best use of a small monthly donation? I was planning to pay the extra money to my utility company every month for renewable energy, but I figured there might be a more effective use of that same money. Any suggestions?