My mental model of the rationality community (and, thus, some of EA) is "lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also has predictable downsides."
Given this, I'm pessimistic that, in our current setup, we're able to attract the absolute "best and brightest and also most ethical and also most epistemically rigorous people" that exist on Earth.
Ignoring for a moment that it's just hard to find people with all of those qualities combined... what about finding people who are actually top-percentile in any one of those things?
The most "ethical" (like professional-ethics, personal integrity, not "actually creates the most good consequences) people are probably doing some cached thing like "non-corrupt official" or "religious leader" or "activist".
The most "bright" (like raw intelligence/cleverness/working-memory) people are probably doing some typical thing like "quantum physicist" or "galaxy-brained mathematician".
The most "epistemically rigorous" people are writing blog posts, which may or may not even make enough money for them to do that full-time. If they're not already part of the broader "community" (including forecasters and I guess some real-money traders), they might be an analyst tucked away in government or academia.
A broader problem might be something like: promote EA --> some people join it --> other competent people think "ah, EA has all those weird problems handled, so I can keep doing my normal job" --> EA doesn't get the best and brightest.
(This was originally a comment, but I think it deserves more in-depth discussion.)
I think this is entirely legitimate criticism. It's not at all clear to me that the net impact of Effective Altruism, from end to end, has even been positive. And if it has been negative, it has been negative BECAUSE of the impact the movement has had on AI timelines.
This should prompt FAR more reflection than I have seen within the community. People should be racking their brains for what went wrong and crying mea culpa. And working for OpenAI/Anthropic/etc/etc should not be seen as "effective". (Well, maybe now it's okay. Cat's out of the bag. But certainly being an AI capabilities researcher in 2020 did a lot of harm.)
As far as I can tell, the "Don't Build the Torment Nexus" community went ahead and built the Torment Nexus because it was both intellectually interesting and a path for individuals to acquire more power. Oops.
And to be clear, in my mind at least, any harms done from the FTX debacle or the sexual abuse scandals pale in comparison to this. And that is not in any way a trivialization of either of those harms, both of which were also pretty severe. "Accelerate AI timelines" is just that bad.