So much of the talk in AI and EA seems to be about AI alignment. But even if we build aligned AI, an open-sourced model could be tweaked by a bad actor to do catastrophic harm to all of us. Isn't that possibility even worse than AI being concentrated in the hands of a few individuals, even ones with bad intentions (assuming those intentions amount to greed, not outright sociopathy)? I know Tristan Harris has talked about this dichotomy, and there's Bostrom's Vulnerable World Hypothesis, but I don't see any way around the fact that a single sophisticated open-source LLM could be used to wipe out everyone, and that would be worse than anything possible with a closed-source AI. I'd love to hear the alternative arguments (that's why I'm posting!), so please do share.