It doesn't follow from there being no clear definition of something that there aren't clear positive and negative cases of it, only that it's blurry at the boundaries. For example, suppose the only things that existed were humans, rocks, and lab-grown human food. There still wouldn't be a clear definition of "conscious", but it would be clear that only humans were conscious, since lab-grown meat and veg and rocks clearly don't count on any interpretation of 'consciousness'. Maybe all mites obviously don't count too. I agree with you that BB can't just assume that about mites, though, and needs to provide an argument.
Presumably there are at least some people who have long timelines but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but that even a very low X-risk is very bad. (By very low, I mean something like at least 1 in 1000, not 1 in 10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as the latter.)
I think you are pointing at a real tension, though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough, and raised enough by acceleration, that acceleration is bad. It's hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don't have to be raging dogmatists to worry about this happening again, and it's reasonable for them to balance this risk against the risk of echo chambers when hiring people or funding projects.
*I'm less sure that merely catastrophic biorisk from human misuse is low, sadly.
I don't find accusations of fallacy helpful here. The authors say explicitly in the abstract that they estimated the probability of each step conditional on the previous ones. So they are not making a simple, formal error like multiplying a bunch of unconditional probabilities whilst forgetting that this only works if the events are independent. Rather, you and Richard Ngo think that their estimates for the explicitly conditional probabilities are too low, and you are speculating that this is because they are still really thinking of the unconditional probabilities. But I don't think "you are committing a fallacy" is a very good or fair way of describing "I disagree with your probabilities, and I have some unevidenced speculation about why you are giving probabilities that are wrong".
"A fraudulent charity" does not sound to me much like "a charity that knowingly used a mildly overoptimistic figure for the benefits of one of its programs even after admitting under pressure it was wrong'. Rather, I think the rhetorical force of the phrase comes mostly from the fact that to any normal English speaker it conjures up the image of a charity that is a scam in the sense that it is taking money, not doing charitable work with it, and instead just putting it into the CEO's (or whoever's) personal bank account. My feeling on this isn't really effected by whether the first thing meets the legal definition of fraud, probably it does. My guess is that many charities that almost no one would describe as "fraudulent organizations" have done something like this or equivalently bad at some point in their histories, probably including some pretty effective ones.
Not that I think this means Singeria have done nothing wrong. If they agree the figure is clearly overoptimistic, they should change it. Not doing so is deceptive, and probably illegal. But I find it a bit irritating that you are using what seems to me to be somewhat deceptive rhetoric whilst attacking them for being deceptive.
My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then, the agree votes are something like 90-something to 30-something, with 200 karma, on Yarrow's comment saying NL still seem bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don't think people really dropped the ball here; they were honestly struggling to take accusations of bad behaviour seriously without getting into witch-hunt dynamics.
"Throwing soup at van gogh paintings have none of these attributes, so it is counter-productive."
What's the evidence it was counterproductive?