I don't find accusations of fallacy helpful here. The authors say explicitly in the abstract that they estimated the probability of each step conditional on the previous ones. So they are not making a simple, formal error like multiplying a bunch of unconditional probabilities whilst forgetting that this only works if the steps are independent. Rather, you and Richard Ngo think that their estimates for the explicitly conditional probabilities are too low, and you are speculating that this is because they are still really thinking of the unconditional probabilities. But I don't think "you are committing a fallacy" is a very good or fair way to describe "I disagree with your probabilities and I have some unevidenced speculation about why you are giving probabilities that are wrong".
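(To spell out the distinction with a toy illustration of my own, not the authors': for three steps A, B, C, the correct calculation is
P(A and B and C) = P(A) × P(B | A) × P(C | A and B),
whereas the fallacious version is P(A) × P(B) × P(C), which only agrees with the correct one when the steps are independent. The authors say they are estimating the conditional terms, so the disagreement is about the values plugged in, not the formula.)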
"A fraudulent charity" does not sound to me much like "a charity that knowingly used a mildly overoptimistic figure for the benefits of one of its programs even after admitting under pressure it was wrong'. Rather, I think the rhetorical force of the phrase comes mostly from the fact that to any normal English speaker it conjures up the image of a charity that is a scam in the sense that it is taking money, not doing charitable work with it, and instead just putting it into the CEO's (or whoever's) personal bank account. My feeling on this isn't really effected by whether the first thing meets the legal definition of fraud, probably it does. My guess is that many charities that almost no one would describe as "fraudulent organizations" have done something like this or equivalently bad at some point in their histories, probably including some pretty effective ones.
Not that I think that means Singeria have done nothing wrong. If they agree the figure is clearly overoptimistic, they should change it. Not doing so is deceptive, and probably illegal. But I find it a bit irritating that you are using what seems to me to be somewhat deceptive rhetoric whilst attacking them for being deceptive.
My memory is that a large number of people took the NL controversy seriously, and that the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then, there are something like 90-something agree votes to 30-something disagree votes and 200 karma on Yarrow's comment saying NL still seem bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don't think people really dropped the ball here; people were honestly struggling to take accusations of bad behaviour seriously without getting into witch-hunt dynamics.
"Because once a country embraces Statism, it usually begins an irreversible process of turning into a "shithole country", as Trump himself eloquently put it. "
Ignoring tiny islands (some of them with dubious levels of independence from the US), the 10 nations with the largest government revenue as a share of GDP include Finland, France, Belgium and Austria, although also, yes, Libya and Lesotho. In general, the top of the list for government revenue as a % of GDP seems to be a mixture of small islands, petro-states, and European welfare-state democracies, not places that are particularly impoverished or authoritarian: https://en.wikipedia.org/wiki/List_of_countries_by_government_spending_as_percentage_of_GDP#List_of_countries_(2024)
Meanwhile, the countries with the lowest levels of government revenue as a % of GDP that aren't currently having some kind of civil war are places like Bangladesh, Sri Lanka, Iran and (weirdly) Venezuela.
This isn't a perfect proxy for "statism", obviously, but I think it shows that things are more complicated than simplistic libertarian analysis would suggest. Big states (in purely monetary terms) often seem to be a consequence of success. Maybe they also hold back further success, of course, but countries don't seem to actively degenerate once they arrive there (i.e. growth might slow, but they are not in permanent recession).
I'd distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly become more AI-focused (and/or publicly admitted to a degree of focus on AI they've always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether less money is being spent by EAs on animal welfare or global health and development in 2025 than was in 2015 or 2018. (I looked on Open Phil's website, and so far this year spending seems well down from 2018 but also well up from 2015, though two months isn't much of a sample.) Not that that means you're not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
Presumably there are at least some people who have long timelines but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but that even a very low X-risk is very bad. (By very low, I mean something like at least 1 in 1000, not 1 in 10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as that.)
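(To make the arithmetic explicit, with made-up numbers of my own: if the downside were valued at 10^9 lives, a 1-in-1000 risk costs about 10^6 lives in expectation, which clearly warrants effort; at 1 in 10^17 the expected cost is about 10^-8 lives, which is the regime where I think expected value reasoning stops being a good guide.)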
I think you are pointing at a real tension, though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough, and raised enough by acceleration, that acceleration is bad. It's hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don't have to be raging dogmatists to worry about this happening again, and it's reasonable for them to balance this risk against the risk of echo chambers when hiring people or funding projects.
*I'm less sure that merely catastrophic biorisk from human misuse is low, sadly.