Thinking, writing, and tweeting from Berkeley, California. Previously, I ran programs at the Institute for Law & AI, worked on the one-on-one advising team at 80,000 Hours in London, and was a patent litigator at Sidley Austin in Chicago.
I agree that people should be doing a better job here. As you say, you can just explain what you're doing and articulate your confidence in specific claims.
The thing you want to track is confidence × importance. MacAskill and Ball do worse than Piper here. Both of them were making fundamental claims about their primary projects/areas of expertise, and all claims in those two areas are somewhat low confidence, so people adjust their expectations accordingly.
MacAskill and Ball both have defenses too. In MacAskill's case, he's got a big body of other work that makes it fairly clear DGB was not a comprehensive account of his all-things-considered views. It'd be nice to clear up the confusion by stating how he resolves the tension between his different works, but the audience can also read them and resolve the tension for themselves. The specific content of William MacAskill's brain is just not the thing that matters, and it's fine for him to act that way as long as he's not being systematically misleading.
Ball looks worse, but I wouldn't be surprised if he alluded to his true view somewhere public and merely chose not to emphasize it so as to better navigate an insane political environment. If not, that's bad, but again there's a valid move of saying "here are some rationales for doing X" that doesn't obligate you to disclose the ones you care most about, though this is risky business and a mild negative update on your trustworthiness.
Many creators act as though YouTube's algorithm disfavors content that refers to graphic acts of sex and violence, e.g., they bleep words like 'kill' or 'suicide' or refer to them in very circuitous ways. I would guess these are incomplete methods of avoidance and that YT tries to keep up by detecting these workarounds. Seems like a potential issue for the MechaHitler video.
Good characterization; I should have watched the video. Seems like she may be unwilling to consider that the weird Silicon Valley stuff is correct, but she explicitly says she's just raising the question of motivated reasoning.
The "writing scifi with your smart friends" is quite an unfair characterization, but fundamentally on us to counter. I think it will all turn on whether people find AI risk compelling.
For that, there's always going to be a large constituency scoffing. There's a level at which we should just tolerate that, but we're still at a place where communicating the nature of AI risk work more broadly and more clearly is important on the margin.
A 10% chance of transformative AI this decade justifies current EA efforts to make AI go well. That calculation includes the opportunity cost of that money not going to other things in the 90% of worlds without TAI. Spending money on, e.g., nuclear disarmament instead of AI likewise implies harm in the 10% of worlds where TAI was coming. Simply calculating the expected value of each option accounts for both of these costs.
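A minimal sketch of that comparison, with purely hypothetical probabilities and payoffs chosen only to show the structure of the expected value calculation (none of these numbers are estimates from the comment above):

```python
# Illustrative only: probabilities and payoffs are made-up placeholders,
# not estimates drawn from the comment above.
p_tai = 0.10  # assumed chance of transformative AI this decade

# Hypothetical value of each spending option in each kind of world.
ev_ai_work = p_tai * 100 + (1 - p_tai) * 0   # valuable if TAI arrives, ~worthless otherwise
ev_nuclear = p_tai * 0 + (1 - p_tai) * 5     # modest value only in the no-TAI worlds

# Both costs (money wasted in no-TAI worlds, harm from ignoring TAI in TAI worlds)
# are already baked into these expectations; comparing them is the whole exercise.
print(ev_ai_work, ev_nuclear)  # 10.0 vs 4.5 under these made-up numbers
```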
It's also important to understand that Hendrycks and Yudkowsky were simply describing/predicting the geopolitical equilibrium that follows from their strategies, not independently advocating for the airstrikes or sabotage. Leopold is a more ambiguous case, but even he says that the race is already the reality, not something he prefers independently. I also think very few "EA" dollars are going to any of these groups/individuals.
I agree this is quite bad practice in general, though see my other comment for why I think these are not especially bad cases.
A central error in these cases is assuming audiences will draw the wrong inferences from your true view and do bad things because of that. As far as I can tell, no one has a good enough command of the epistemic dynamics here to be able to say that with confidence and then act on it. If you aren't explicit and transparent about your reasoning, people can make any number of assumptions; others can poke holes in your less-than-fully-endorsed claim, undermining either the claim or your credibility; and people can use that to justify all kinds of things.
You need to trust that your audience will understand your true view or that you can communicate it properly. Any alternative assumption is speculation whose consequences you should feel more, not less, responsible for, since you decided to mislead people for the sake of the consequences rather than simply being transparent and letting the audience take responsibility for how they react to what you say.
I think people who do the bad version of this often have this ~thought experiment in mind: "my audience would rather I tell them the thing that makes their lives better than the literal content of my thoughts." As a member of your audience, I agree. I don't, however, agree with the subtly altered but more realistic version of the thought experiment: "my audience would rather I tell them the thing that I think makes their lives better than the literal content of my thoughts."