99 percent of the prospective community members who would have a significant impact in their lives are in low and middle income countries.
I think this is false. According to Gapminder (the first Google hit for the question), 16% of the world population lives in high-income countries (the USA alone has >4%). I do not think these countries hold only 1% of the people who would have a significant impact in their lives.
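To make that concrete, here is a quick back-of-the-envelope check (a sketch with rough Gapminder-style population shares; the 1% is just the complement implied by the quoted claim):

```python
# Rough sanity check of the "99% in LMICs" claim against population shares.
# Population share is Gapminder-style; the 1% follows from the quoted claim.

share_pop_high_income = 0.16      # ~16% of world population
share_members_high_income = 0.01  # implied by the "99% in LMICs" claim

rate_high_income = share_members_high_income / share_pop_high_income       # 0.0625
rate_lmic = (1 - share_members_high_income) / (1 - share_pop_high_income)  # ~1.18

# The claim implies someone in an LMIC is ~19x more likely to be a
# prospective high-impact member than someone in a high-income country.
print(f"Implied rate ratio: {rate_lmic / rate_high_income:.1f}x")  # ~18.9x
```

A ~19x per-capita difference seems implausibly large to me.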
Wow, I am wondering whether to engage further or just let your reply stand as a testament to your "thoughtfulness". Doubling down on stereotyping and mischaracterizing people... great job! (Sorry for the sarcasm, but I am STILL surprised when I encounter this type of behavior on the EA Forum; probably a sign of my naivety...)
I think this is insufficiently kind.
If a point of the article was to get the community to engage with the arguments for degrowth, the author should have engaged with what EAs have already written about degrowth, for example https://www.vox.com/future-perfect/22408556/save-planet-shrink-economy-degrowth or https://forum.effectivealtruism.org/posts/XQRoDuBBt98wSnrYw/the-case-against-degrowth
Something feels off about this article. It does not really discuss what the AI workers might want or believe, or how to convince them that slowing down AI would delay or avoid the extinction of humanity.
Are you assuming a world where the risk of extinction from AGI is widely accepted among AI workers? (If so, why are they still working on the thing that potentially kills everyone?) And if the workers do not believe in (large) risks of extinction from AI, how do you plan to recruit them into your union? That seems hard if you want to be honest about the union's main goal.
I feel a bit uneasy about EAs putting a lot of effort into a survey (both the survey designers and the takers) just because someone made something up at some point. Maybe instead ask the people you'd expect to know better why they believe what they believe?
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and well worth a few negative articles.
I'll note that I stopped reading the linked article after "Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, whether you do "narrow EA" or "global EA".
A reason that is missing from the "contra" list: you could stay at the higher salary and donate the difference to an org more cost-effective than the one you work for.
I would expect that most people who work in EA do not work for the org that they consider to have the highest marginal impact for an additional dollar (although certainly some do).
Accepting a lower salary can be more tax-efficient than donating if the donation is not tax-deductible. But if you think that cost-effectiveness follows a power law, then it's quite possible that there is an org that is more than twice as cost-effective as your current employer.
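A rough sketch of the tax arithmetic (assuming a single flat marginal tax rate, which is a simplification; the rate here is an illustrative number, not anyone's real figure):

```python
# Sketch: salary sacrifice vs. a non-deductible donation, flat marginal tax.

marginal_tax_rate = 0.4  # illustrative assumption

# Forgoing $1 of gross salary gives your employer $1 but only costs you
# the post-tax value of that dollar:
cost_per_dollar_to_employer = 1 - marginal_tax_rate  # $0.60

# A non-deductible donation of $1 costs you the full post-tax dollar:
cost_per_dollar_to_other_org = 1.0

# So the other org only needs to beat your employer's cost-effectiveness
# by this factor for donating to win:
breakeven = cost_per_dollar_to_other_org / cost_per_dollar_to_employer
print(f"Break-even cost-effectiveness ratio: {breakeven:.2f}x")  # ~1.67x
```

Under a power law, a >1.67x (or even >2x) more cost-effective org does not seem rare.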
I feel like this does not really address the question?
A possible answer to Rockwell's question might be: "If we have 15,000 scientists working full-time on AIS, then I consider AIS to no longer be neglected." (This is hypothetical; I do not endorse it, and it's also not as contextualized as Rockwell would want.)
But maybe I am interpreting the question too literally, and you are making a reasonable guess about what Rockwell wants to hear.
I think most probabilistic estimates are subjective probability estimates; there are usually no complicated mathematical models behind them.
Some people do build models, but those models still take subjective probability estimates as inputs. The math is typically not that complicated, often just multiplying different probabilities together (which is imo not a good class of models for this kind of problem; see the sketch below).
My guess would be that even some of the people who build models report different probability estimates for human extinction than the ones their models spit out, because they realize that their models have flaws and try to correct for that.
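To illustrate what I mean by "multiplying different probabilities together", here is a minimal sketch of that model class; the stage names and numbers are invented purely for illustration:

```python
# Minimal sketch of the "multiply conditional probabilities" model class.
# Stage names and numbers are made up for illustration only.

stages = {
    "AGI is built this century": 0.7,
    "it is misaligned, given it is built": 0.4,
    "it seeks power, given misalignment": 0.5,
    "it causes extinction, given all of the above": 0.3,
}

p_extinction = 1.0
for stage, p_conditional in stages.items():
    p_extinction *= p_conditional

print(f"P(extinction) = {p_extinction:.3f}")  # 0.042 with these made-up inputs

# One weakness of this class: the product shrinks toward 0 as stages are
# added, and correlations between the conditional estimates are ignored.
```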
Am I understanding correctly that you wish more EAs would support a military takeover of a poorly governed African country?
If so, I would like to state that EAs putting significant resources into a military takeover of an African country is a bad idea. I might be biased here due to my pro-democracy views, but I would expect that life is on average more unpleasant under a military regime anywhere in the world, not just in the West. You would need to be quite lucky for the military leaders to care enough about the country and to be competent at running it.
It is fine to do a cost comparison of holding elections versus running a military regime, but I have strong priors in this case and would prefer that cost-effectiveness analyses be run on less obvious questions.