My mental model of the rationality community (and, thus, some of EA) is "lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also has predictable downsides."
Given this, I'm pessimistic that, in our current setup, we're able to attract the absolute "best and brightest and also most ethical and also most epistemically rigorous people" that exist on Earth.
Ignoring for a moment that it's just hard to find people with all of those qualities combined... what about finding people with actual-top-percentile any of those things?
The most "ethical" (like professional-ethics, personal integrity, not "actually creates the most good consequences) people are probably doing some cached thing like "non-corrupt official" or "religious leader" or "activist".
The most "bright" (like raw intelligence/cleverness/working-memory) people are probably doing some typical thing like "quantum physicist" or "galaxy-brained mathematician".
The most "epistemically rigorous" people are writing blog posts, which may or may not even make enough money for them to do that full-time. If they're not already part of the broader "community" (including forecasters and I guess some real-money traders), they might be an analyst tucked away in government or academia.
A broader problem might be something like: promote EA --> some people join it --> the other competent people think "ah, EA has all those weird problems handled, so I can keep doing my normal job" --> EA doesn't get the best and brightest.
(This was originally a comment, but I think it deserves more in-depth discussion.)
I strongly, strongly, strongly disagree with this decision.
Per my own values and style of communication, I think that welcoming people like sapphire or Sabs who a) are or can be intensely disagreeable, and b) have points worth sharing and processing, is strongly worth doing, even if c) they make other people uncomfortable, and d) they occasionally misfire, and even if they are wrong most of the time, as long as the expected value of what they say remains high.
In particular, I think that doing so is good for arriving at correct beliefs and for becoming stronger, both of which I value a whole lot. It is the kind of communication we use in my forecasting group, where the goal is to arrive at correct beliefs.
I understand that the EA Forum moderators may have different values, and that they may want to make the forum a less spiky place. Know that this has the predictable consequence of losing a Nuño, and it is part of the reason why I've bothered to create a blog and add comments to it in a way which I expect to be fairly uncensorable[1].
Separately, I do think it is the case that EA "simps" for tech billionaires[2]. An answer I would have preferred to see would be a steelmanning of why that is good, or an argument for why this isn't the case.
[1] Uncensorable by others: I am hosting the blog on top of nja.la and the comments on my own servers. It is not uncensorable by me; I can and will censor stuff that I think is low value by my own utilitarian/consequentialist lights.
[2] I'm less sure about AI companies, but you could also make the case there; e.g., 80kh does recommend positions at OpenAI (<https://jobs.80000hours.org/?query=OpenAI>).